texasinstruments / edgeai-yolov5

This project forked from ultralytics/yolov5


YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite. Forked from https://ultralytics.com/yolov5

Home Page: https://github.com/TexasInstruments/edgeai

License: GNU General Public License v3.0

Dockerfile 0.79% Shell 3.86% Python 95.35%

edgeai-yolov5's Introduction

YOLOV5-ti-lite Object Detection Models

This repository is based on ultralytics/yolov5. As per the official README from Ultralytics, YOLOv5 is a family of object detectors with the following major differences from YOLOv3:

  • Darknet-csp backbone instead of vanilla Darknet; reduces complexity by 30%.
  • PANet feature extractor instead of FPN.
  • Better box-decoding technique.
  • Genetic-algorithm-based anchor-box selection.
  • Several new augmentation techniques, e.g. mosaic augmentation.

Official Models from Ultralytics

| Dataset | Model Name | Input Size | GFLOPS | AP[0.5:0.95]% | AP50% | Notes |
|---|---|---|---|---|---|---|
| COCO | Yolov5s6 | 1280x1280 | 69.6 | 43.3 | 61.9 | |
| COCO | Yolov5s6_640 | 640x640 | 17.4 | 38.9 | 56.8 | Train @1280, val @640 |
| COCO | Yolov5m6 | 1280x1280 | 209.6 | 50.5 | 68.7 | |
| COCO | Yolov5m6_640 | 640x640 | 52.4 | 45.4 | 63.6 | Train @1280, val @640 |
| COCO | Yolov5l6 | 1280x1280 | 470.8 | 53.4 | 71.1 | |
| COCO | Yolov5l6_640 | 640x640 | 117.7 | 49.0 | 67.0 | Train @1280, val @640 |

YOLOV5-ti-lite model definition

  • YOLOV5-ti-lite is a version of YOLOV5 from TI for efficient edge deployment. This naming convention was chosen to avoid conflict with a future release of YOLOV5-lite models from Ultralytics.

  • Here is a brief description of the changes that were made to obtain yolov5-ti-lite from yolov5:

    • YOLOV5 introduces a Focus layer as the very first layer of the network. It replaces the first few heavy convolution layers that are present in YOLOv3, reducing the complexity of the network by 7% and the training time by 15%. However, the slice operations in the Focus layer are not embedded-friendly, so we replace it with a light-weight convolution layer. [Figure: changes from YOLOv3 to YOLOv5 to YOLOv5-ti-lite.] A minimal sketch contrasting the two stems is given after this list.

    • SiLU activation is not well supported on embedded devices, and it is not quantization-friendly either because of its unbounded nature; a similar issue was observed for the hSwish activation while quantizing EfficientNet. Hence, SiLU activation is replaced with ReLU.

    • The SPP module's maxpool(k=13, s=1), maxpool(k=9, s=1) and maxpool(k=5, s=1) are replaced with various combinations of maxpool(k=3, s=1). The intention is to keep the receptive field and functionality the same. This change makes no difference to the model in floating point (a numerical check is sketched after this list):

      • maxpool(k=5, s=1) -> replaced with two maxpool(k=3, s=1)
      • maxpool(k=9, s=1) -> replaced with four maxpool(k=3, s=1)
      • maxpool(k=13, s=1) -> replaced with six maxpool(k=3, s=1)

    • Variable-size inference is replaced with fixed-size inference, as preferred by edge devices. E.g., TFLite models are exported with a fixed input size.
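The sketch below is illustrative only (channel counts and kernel sizes are assumptions, not the repo's exact module definitions). It contrasts a Focus-style slicing stem with the plain stride-2 convolution used in the ti-lite variant:

    # Focus-style stem vs. a plain stride-2 convolution (minimal sketch).
    import torch
    import torch.nn as nn

    class Focus(nn.Module):
        """Space-to-depth slicing followed by a convolution (YOLOv5-style stem)."""
        def __init__(self, c_in=3, c_out=32, k=3):
            super().__init__()
            self.conv = nn.Conv2d(c_in * 4, c_out, k, stride=1, padding=k // 2)

        def forward(self, x):
            # The four strided slices are the part that is not embedded-friendly.
            return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                        x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1))

    class FocusLite(nn.Module):
        """ti-lite-style replacement: a single light-weight stride-2 convolution."""
        def __init__(self, c_in=3, c_out=32, k=3):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, k, stride=2, padding=k // 2)

        def forward(self, x):
            return self.conv(x)

    x = torch.randn(1, 3, 640, 640)
    print(Focus()(x).shape, FocusLite()(x).shape)  # both: (1, 32, 320, 320)

The numerical check referenced from the SPP bullet follows (again a sketch, not the repo's SPP module; it only verifies the pooling equivalence in floating point):

    # Stacking maxpool(k=3, s=1) reproduces a single larger-kernel maxpool.
    def pool(k):
        return nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    for k, n in [(5, 2), (9, 4), (13, 6)]:
        single = pool(k)(x)
        stacked = x
        for _ in range(n):
            stacked = pool(3)(stacked)
        print(k, torch.equal(single, stacked))  # expected: True for each pair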

Training and Testing

  • Training any model using this repo will pick up the above changes by default. The same commands as the official repository can be used to train models from scratch, e.g.
    python train.py --data coco.yaml --cfg yolov5s6.yaml --weights '' --batch-size 64
                                           yolov5m6.yaml
    
  • The Yolov5-l6-ti-lite model is fine-tuned for 100 epochs from the official checkpoint. To replicate the results for yolov5-l6-ti-lite, download the official pre-trained weights for yolov5-l6 and set the lr to 1e-3 in hyp.scratch.yaml:
    python train.py --data coco.yaml --cfg yolov5l6.yaml --weights 'yolov5l6.pt' --batch-size 40
    
  • Pretrained model checkpoints along with ONNX and prototxt files are kept inside pretrained_models.
  • Run the following command to replicate the accuracy numbers for the pretrained checkpoints:
    python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65 --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt
                                                                                                  yolov5m6_640_ti_lite
                                                                                                  yolov5l6_640_ti_lite
    

Models trained by TI


Pre-trained Checkpoints

| Dataset | Model Name | Input Size | GFLOPS | AP[0.5:0.95]% | AP50% | Notes |
|---|---|---|---|---|---|---|
| COCO | Yolov5s6_ti_lite_640 | 640x640 | 17.48 | 37.4 | 56.0 | |
| COCO | Yolov5s6_ti_lite_576 | 576x576 | 14.16 | 36.6 | 55.7 | Train @640, val @576 |
| COCO | Yolov5s6_ti_lite_512 | 512x512 | 11.18 | 35.3 | 54.3 | Train @640, val @512 |
| COCO | Yolov5s6_ti_lite_448 | 448x448 | 8.56 | 34.0 | 52.3 | Train @640, val @448 |
| COCO | Yolov5s6_ti_lite_384 | 384x384 | 6.30 | 32.8 | 51.2 | Train @384, val @384 |
| COCO | Yolov5s6_ti_lite_320 | 320x320 | 4.38 | 30.3 | 47.6 | Train @384, val @320 |
| COCO | Yolov5m6_ti_lite_640 | 640x640 | 52.5 | 44.1 | 62.9 | |
| COCO | Yolov5m6_ti_lite_576 | 576x576 | 42.52 | 43.0 | 61.9 | Train @640, val @576 |
| COCO | Yolov5m6_ti_lite_512 | 512x512 | 32.16 | 42.0 | 60.5 | Train @640, val @512 |
| COCO | Yolov5l6_ti_lite_640 | 640x640 | 117.84 | 47.1 | 65.6 | Finetuned from the official ckpt for 100 epochs |

There are three models in pretrained_models. All other results are generated by evaluating these models at different resolutions. For example, to generate the accuracy numbers at 512x512, run the following:

python test.py --data coco.yaml --img 512 --conf 0.001 --iou 0.65 --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt
                                                                                              yolov5m6_640_ti_lite
                                                                                              yolov5l6_640_ti_lite

ONNX export including detection:

  • Run the following command to export the entire model, including the detection part:
    python export.py --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt  --img 640 --batch 1 --simplify --export-nms --opset 11 # export at 640x640 with batch size 1
  • Apart from exporting the complete ONNX model, the above script also generates a prototxt file that contains information about the detection layer. This prototxt file is required to deploy the model on a TI SoC.
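As a quick sanity check (a sketch, not part of the repo; the output path and the 1x3x640x640 NCHW float32 input layout are assumptions based on the export command above), the exported model can be loaded with onnxruntime and run on a dummy input:

    import numpy as np
    import onnxruntime as ort

    # Load the ONNX model produced by export.py and inspect its inputs/outputs.
    sess = ort.InferenceSession("pretrained_models/yolov5s6_640_ti_lite/weights/best.onnx")
    inp = sess.get_inputs()[0]
    dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
    outputs = sess.run(None, {inp.name: dummy})
    for meta, out in zip(sess.get_outputs(), outputs):
        print(meta.name, out.shape)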

References

[1] Official YOLOV5 repository
[2] yolov5-improvements-and-evaluation, Roboflow
[3] Focus layer in YOLOV5
[4] CrossStagePartial Network
[5] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshop), 2020.
[6] Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8759–8768, 2018.
[7] Efficientnet-lite quantization
[8] [YOLOv5 Training video from Texas Instruments](https://training.ti.com/process-efficient-object-detection-using-yolov5-and-tda4x-processors)

edgeai-yolov5's People

Contributors

aehogan, albinxavi, alexstoken, alexwang1900, anon-artist, ayushexel, borda, cristifati, developer0hye, dlawrences, fcakyon, glenn-jocher, kinoute, laughing-q, lorenzomammana, lornatang, nanocode012, olehb, ownmarc, skalskip, taoxiesz, thanhminhmr, tkianai, toretak, wanghaoyang0106, yeric1789, yxnong, zigars, zldrobit, zoujiu1


edgeai-yolov5's Issues

About GMACs reported in the YOLO-Pose paper

❔Question

Why are the GMACs of other methods reported in this paper double the GFLOPs reported in the original papers?

Usually FLOPs = MACs or FLOPs = 2 * MACs, depending on the definition used. I have not checked which one is reported in the original paper, but it should not be MACs / 2.
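For illustration, the two counting conventions mentioned above differ only by a factor of two (a generic example, not tied to any particular paper):

    # Operation count of a single convolution layer under both conventions.
    c_in, c_out, k, h, w = 64, 128, 3, 80, 80
    macs = c_in * c_out * k * k * h * w   # multiply-accumulate operations
    flops_1x = macs                       # convention: 1 FLOP per MAC
    flops_2x = 2 * macs                   # convention: multiply and add counted separately
    print(f"MACs={macs:,}  FLOPs(1x)={flops_1x:,}  FLOPs(2x)={flops_2x:,}")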

The weights in "pretrained_models" cannot be downloaded

❔Question

In the "pretrained_models",unable to download “best.pt” through the link in the “best.pt.link”.
In the README,"pre trained_models" cannot jump to download weights.
Where do I get the weight to test? Thanks!

Additional context

OKSloss and keypoints detection results

❔Question

Hi, I'm trying to implement keypoint detection according to your paper and the official yolov5-v6.1, but I have some problems and can't find their cause. I hope you can give me some advice. Thanks!
Here are some observations.

  1. It seems that my 17 keypoint positions collapse toward the center of the person bounding box when using the OKS loss (see the OKS sketch after this list).
    The results are generated by the model trained with the OKS loss; I implemented the OKS loss based on the COCO API.
    [images]

  2. At first I thought it was caused by bugs in building targets, but the results generated by the model trained with an L1 loss confused me even more: even though the keypoint positions are not correct, they are at least separated from each other.
    [images]

  3. The keypoint position regression loss (both OKS and L1) stays constant, but the decline of the keypoint visibility loss looks normal.
    [training-curve images: kpt -> OKS loss, kvis -> BCE; kpt -> L1 loss, kvis -> BCE]

  4. I simply add a keypoint detection layer after the object detection layer in the yolov5m network structure. I initialize the network with the pretrained yolov5m model and do not freeze any layers. The object detection results look normal, as shown below.
    [images: results with OKS-loss keypoints and with L1-loss keypoints]
    I also tried adding an extra head for keypoint detection, but the results are similar.
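For reference, here is a minimal sketch of the standard COCO OKS metric that the discussion in item 1 refers to (general knowledge about the COCO metric, not this repo's or the paper's exact loss implementation):

    import torch

    def oks(pred, gt, vis, area, sigmas):
        """pred, gt: (K, 2) keypoints; vis: (K,) visibility flags; area: object area;
        sigmas: (K,) per-keypoint COCO constants (k_i = 2 * sigma_i)."""
        d2 = ((pred - gt) ** 2).sum(dim=-1)        # squared distance per keypoint
        k2 = (2 * sigmas) ** 2
        e = d2 / (2 * area * k2 + 1e-9)            # normalised by object scale and k_i
        mask = (vis > 0).float()
        return (torch.exp(-e) * mask).sum() / mask.sum().clamp(min=1)

    # One common way to turn this into a loss is 1 - oks(...); the paper's exact
    # formulation may differ.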

coco_kpts.yaml ?

❔Question

coco_kpts.yaml ?

Additional context

Hello author, where is coco_kpts.yaml?

Could you provide config files?

Hi, thanks for the great work. I cannot find config YAMLs like yolov5s6_kpts_ti_lite.yaml and yolov5s6_kpts.yaml in the repo or in the links in the README. models/yolov5s.yaml doesn't declare P6, so I assume you have different settings.

Could you provide those YAMLs? Thank you.

Some of the key points are severely shifted

Hi @debapriyamaji, I got this result after retraining on the COCO dataset with the pre-trained model, and only these two keypoints have a very serious offset. I think there could be some relationship with the sigmas; I had a similar problem when training my own data. I hope I can get your answer, thanks!
[images: 000000002473]

Specific keypoints out of the bounding-box range

After running the demo code on test pictures, there are often specific keypoints that fall outside the bounding-box range. I have seen other people raising similar questions, but no exact solution has been given so far.

YOLO-POSE code

❔Question

Hi, I have read your paper "YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss", and I was fascinated by the potential of your idea. Your paper points to this repo, but I cannot seem to find any material related to keypoint detection. All I see is code related to object detection.

Are you still working on this? Or does the code exist in another branch?

yolov5s compiled by TIDL fault

When I export yolov5s.pt with export.py and compile the resulting yolov5s.onnx (with yolov5s.prototxt) using the TIDL onnxrt_ep.py, a segmentation fault is raised, as shown below.
[image]
However, if the ONNX file is not exported by export.py, for example the downloaded yolov5s6_640_ti_lite_37p4_56p0.onnx, then TIDL compiles it fine. Why?

Following the debug flow, this fault is reported from the lines below in onnxrt_ep.py in edgeai-tidl-tools/examples/osrt_python/ort:

 EP_list = ['TIDLExecutionProvider','CPUExecutionProvider']
 sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so) 

No file named 'test.py'

❔Question

Thanks for sharing your great work.
There is no file named 'test.py'.

Additional context

Can I use 'detect.py' instead?

Some issue: Convert the onnx model to tda4 platform( * . bin)

❔Question

Hi, thanks for your nice work! I now have some problems with model conversion when converting the ONNX model to the TDA4 platform (*.bin). I hope to get your advice, thanks a lot.

Additional context

TIDL Meta PipeLine (Proto) File : ../../test/testvecs/config/import/public/onnx/mlh_yolov4/yolov5.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:309] Error parsing text-format tidl_meta_arch.TIDLMetaArch: 54:5: Unknown enumeration value of "CODE_TYPE_YOLO_V5" for field "code_type".
ERROR: google::protobuf::TextFormat::Parse proto file(../../test/testvecs/config/import/public/onnx/mlh_yolov4/yolov5.prototxt) FAILED !!!

Details as follows:

image

Where is the pt file that can be run on TDA4

❔Question

I have tried these two models:
https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/yolov5s6_640_ti_lite/weights/yolov5s6_640_ti_lite_37p4_56p0.onnx
https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/yolov5s6_640_ti_lite/weights/best.pt

I converted best.pt to best.onnx using the command below: python export.py --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt --img 640 --batch 1 --simplify --export-nms --opset 11

When I used ./out/tidl_model_import.out to compile yolov5s6_640_ti_lite_37p4_56p0.onnx and best.onnx to bin files, I found that yolov5s6_640_ti_lite_37p4_56p0.onnx compiled successfully while best.onnx failed. The picture below is the error log.

prototxt: https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/yolov5s6_640_ti_lite/weights/yolov5s6_640_ti_lite_metaarch.prototxt

My question is: if I want to train on my own dataset and run the model on TDA4, I need a pt file that can be exported to ONNX, and that ONNX file must compile to a bin file successfully. Where is that pt file located?

image

Retrain yolo-pose on custom dataset

Thanks for your great work!

I'm trying to retrain yolo-pose on my custom dataset with a different number of keypoints but the same format, and I may further extend it with a segmentation module based on your work.

I first reproduced it on the COCO dataset and obtained reasonable inference results on my custom dataset. After that, I modified every nkpt setting I could find from 17 (COCO) to 12 (custom) and the training pipeline works. But the results are terrible and the keypoint loss stays around 0.3 throughout 300 epochs. I tried overfitting on a single-image dataset, but the kpts loss is still high.

Could you please give some tips on how to fix this problem, or point out some modification I may have missed?

my modified list:
compute_loss, create_targets in loss.py
cache_labels in datasets.py
Detect.forword in yolo.py
nkpt in hub/model.yaml
plots.py

hope for your reply @debapriyamaji

Question about the human keypoint prediction on edgeai-yolox repository.

❔Question

Do the edgeai-yolox repository and the edgeai-yolov5 repository have a similar effect on human keypoint prediction?

Additional context

I am interested in the detection of human keypoints with the edgeai-yolox repository, but issues cannot be submitted in that repository.
I saw that the edgeai-yolox repository does have human-pose code; however, the edgeai-yolov5 repository seems to be the latest version.

Results are wrong

Hi, I ran your code but got wrong results. I used the following models to run detect.py:
[image]

But I got the following results; where did it go wrong?
[images]

inference result error

❔Question

Hi, thanks for releasing the code. I ran detect.py to test my data and found that the keypoints of a distant person's head are far from the detection box. I use Yolov5s6_pose_960.pt with the input size set to 960, 1080, and 1280.

Additional context

Like this, with input size set to 960:
[image: posetest]

When I set 1080, more head keypoints are far from the detections:
[image: posetest_1080]

How to load the weights from yolo-pose for inference

❔Question

Hi there

I'm completely new to PyTorch and YOLO, so I assume this is a very basic question.

I try to load the yolo-pose model weights with this code:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
weight = torch.load('C:\Pose\yolov5s.pt')
model.load_state_dict(weight["model"].state_dict())

The weights are downloaded from: https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose/weights

But I got this error:

Traceback (most recent call last):
File "C:\MVP\yolov5\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 3, in <cell line: 3>
model.load_state_dict(weight["model"].state_dict())
File "C:\MVP\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1497, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoShape:
Missing key(s) in state_dict: "model.model.model.0.conv.weight", "model.model.model.0.conv.bias", "model.model.model.1.conv.weight", "model.model.model.1.conv.bias", "model.model.model.2.cv1.conv.weight", "model.model.model.2.cv1.conv.bias", "model.model.model.2.cv2.conv.weight", "model.model.model.2.cv2.conv.bias", "model.model.model.2.cv3.conv.weight", "model.model.model.2.cv3.conv.bias", "model.model.model.2.m.0.cv1.conv.weight", "model.model.model.2.m.0.cv1.conv.bias", "model.model.model.2.m.0.cv2.conv.weight", "model.model.model.2.m.0.cv2.conv.bias", "model.model.model.3.conv.weight", "model.model.model.3.conv.bias", "model.model.model.4.cv1.conv.weight", "model.model.model.4.cv1.conv.bias", "model.model.model.4.cv2.conv.weight", "model.model.model.4.cv2.conv.bias", "model.model.model.4.cv3.conv.weight", "model.model.model.4.cv3.conv.bias", "model.model.model.4.m.0.cv1.conv.weight", "model.model.model.4.m.0.cv1.conv.bias", "model.model.model.4.m.0.cv2.conv.weight", "model.model.model.4.m.0.cv2.conv.bias", "model.model.model.4.m.1.cv1.conv.weight", "model.model.model.4.m.1.cv1.conv.bias", "model.model.model.4.m.1.cv2.conv.weight", "model.model.model.4.m.1.cv2.conv.bias", "model.model.model.5.conv.weight", "model.model.model.5.conv.bias", "model.model.model.6.cv1.conv.weight", "model.model.model.6.cv1.conv.bias", "model.model.model.6.cv2.conv.weight", "model.model.model.6.cv2.conv.bias", "model.model.model.6.cv3.conv.weight", "model.model.model.6.cv3.conv.bias", "model.model.model.6.m.0.cv1.conv.weight", "model.model.model.6.m.0.cv1.conv.bias", "model.model.model.6.m.0.cv2.conv.weight", "model.model.model.6.m.0.cv2.conv.bias", "model.model.model.6.m.1.cv1.conv.weight", "model.model.model.6.m.1.cv1.conv.bias", "model.model.model.6.m.1.cv2.conv.weight", "model.model.model.6.m.1.cv2.conv.bias", "model.model.model.6.m.2.cv1.conv.weight", "model.model.model.6.m.2.cv1.conv.bias", "model.model.model.6.m.2.cv2.conv.weight", "model.model.model.6.m.2.cv2.conv.bias", "model.model.model.7.conv.weight", "model.model.model.7.conv.bias", "model.model.model.8.cv1.conv.weight", "model.model.model.8.cv1.conv.bias", "model.model.model.8.cv2.conv.weight", "model.model.model.8.cv2.conv.bias", "model.model.model.8.cv3.conv.weight", "model.model.model.8.cv3.conv.bias", "model.model.model.8.m.0.cv1.conv.weight", "model.model.model.8.m.0.cv1.conv.bias", "model.model.model.8.m.0.cv2.conv.weight", "model.model.model.8.m.0.cv2.conv.bias", "model.model.model.9.cv1.conv.weight", "model.model.model.9.cv1.conv.bias", "model.model.model.9.cv2.conv.weight", "model.model.model.9.cv2.conv.bias", "model.model.model.10.conv.weight", "model.model.model.10.conv.bias", "model.model.model.13.cv1.conv.weight", "model.model.model.13.cv1.conv.bias", "model.model.model.13.cv2.conv.weight", "model.model.model.13.cv2.conv.bias", "model.model.model.13.cv3.conv.weight", "model.model.model.13.cv3.conv.bias", "model.model.model.13.m.0.cv1.conv.weight", "model.model.model.13.m.0.cv1.conv.bias", "model.model.model.13.m.0.cv2.conv.weight", "model.model.model.13.m.0.cv2.conv.bias", "model.model.model.14.conv.weight", "model.model.model.14.conv.bias", "model.model.model.17.cv1.conv.weight", "model.model.model.17.cv1.conv.bias", "model.model.model.17.cv2.conv.weight", "model.model.model.17.cv2.conv.bias", "model.model.model.17.cv3.conv.weight", "model.model.model.17.cv3.conv.bias", "model.model.model.17.m.0.cv1.conv.weight", "model.model.model.17.m.0.cv1.conv.bias", "model.model.model.17.m.0.cv2.conv.weight", 
"model.model.model.17.m.0.cv2.conv.bias", "model.model.model.18.conv.weight", "model.model.model.18.conv.bias", "model.model.model.20.cv1.conv.weight", "model.model.model.20.cv1.conv.bias", "model.model.model.20.cv2.conv.weight", "model.model.model.20.cv2.conv.bias", "model.model.model.20.cv3.conv.weight", "model.model.model.20.cv3.conv.bias", "model.model.model.20.m.0.cv1.conv.weight", "model.model.model.20.m.0.cv1.conv.bias", "model.model.model.20.m.0.cv2.conv.weight", "model.model.model.20.m.0.cv2.conv.bias", "model.model.model.21.conv.weight", "model.model.model.21.conv.bias", "model.model.model.23.cv1.conv.weight", "model.model.model.23.cv1.conv.bias", "model.model.model.23.cv2.conv.weight", "model.model.model.23.cv2.conv.bias", "model.model.model.23.cv3.conv.weight", "model.model.model.23.cv3.conv.bias", "model.model.model.23.m.0.cv1.conv.weight", "model.model.model.23.m.0.cv1.conv.bias", "model.model.model.23.m.0.cv2.conv.weight", "model.model.model.23.m.0.cv2.conv.bias", "model.model.model.24.anchors", "model.model.model.24.m.0.weight", "model.model.model.24.m.0.bias", "model.model.model.24.m.1.weight", "model.model.model.24.m.1.bias", "model.model.model.24.m.2.weight", "model.model.model.24.m.2.bias".
Unexpected key(s) in state_dict: "model.0.conv.weight", "model.0.bn.weight", "model.0.bn.bias", "model.0.bn.running_mean", "model.0.bn.running_var", "model.0.bn.num_batches_tracked", "model.1.conv.weight", "model.1.bn.weight", "model.1.bn.bias", "model.1.bn.running_mean", "model.1.bn.running_var", "model.1.bn.num_batches_tracked", "model.2.cv1.conv.weight", "model.2.cv1.bn.weight", "model.2.cv1.bn.bias", "model.2.cv1.bn.running_mean", "model.2.cv1.bn.running_var", "model.2.cv1.bn.num_batches_tracked", "model.2.cv2.conv.weight", "model.2.cv2.bn.weight", "model.2.cv2.bn.bias", "model.2.cv2.bn.running_mean", "model.2.cv2.bn.running_var", "model.2.cv2.bn.num_batches_tracked", "model.2.cv3.conv.weight", "model.2.cv3.bn.weight", "model.2.cv3.bn.bias", "model.2.cv3.bn.running_mean", "model.2.cv3.bn.running_var", "model.2.cv3.bn.num_batches_tracked", "model.2.m.0.cv1.conv.weight", "model.2.m.0.cv1.bn.weight", "model.2.m.0.cv1.bn.bias", "model.2.m.0.cv1.bn.running_mean", "model.2.m.0.cv1.bn.running_var", "model.2.m.0.cv1.bn.num_batches_tracked", "model.2.m.0.cv2.conv.weight", "model.2.m.0.cv2.bn.weight", "model.2.m.0.cv2.bn.bias", "model.2.m.0.cv2.bn.running_mean", "model.2.m.0.cv2.bn.running_var", "model.2.m.0.cv2.bn.num_batches_tracked", "model.3.conv.weight", "model.3.bn.weight", "model.3.bn.bias", "model.3.bn.running_mean", "model.3.bn.running_var", "model.3.bn.num_batches_tracked", "model.4.cv1.conv.weight", "model.4.cv1.bn.weight", "model.4.cv1.bn.bias", "model.4.cv1.bn.running_mean", "model.4.cv1.bn.running_var", "model.4.cv1.bn.num_batches_tracked", "model.4.cv2.conv.weight", "model.4.cv2.bn.weight", "model.4.cv2.bn.bias", "model.4.cv2.bn.running_mean", "model.4.cv2.bn.running_var", "model.4.cv2.bn.num_batches_tracked", "model.4.cv3.conv.weight", "model.4.cv3.bn.weight", "model.4.cv3.bn.bias", "model.4.cv3.bn.running_mean", "model.4.cv3.bn.running_var", "model.4.cv3.bn.num_batches_tracked", "model.4.m.0.cv1.conv.weight", "model.4.m.0.cv1.bn.weight", "model.4.m.0.cv1.bn.bias", "model.4.m.0.cv1.bn.running_mean", "model.4.m.0.cv1.bn.running_var", "model.4.m.0.cv1.bn.num_batches_tracked", "model.4.m.0.cv2.conv.weight", "model.4.m.0.cv2.bn.weight", "model.4.m.0.cv2.bn.bias", "model.4.m.0.cv2.bn.running_mean", "model.4.m.0.cv2.bn.running_var", "model.4.m.0.cv2.bn.num_batches_tracked", "model.4.m.1.cv1.conv.weight", "model.4.m.1.cv1.bn.weight", "model.4.m.1.cv1.bn.bias", "model.4.m.1.cv1.bn.running_mean", "model.4.m.1.cv1.bn.running_var", "model.4.m.1.cv1.bn.num_batches_tracked", "model.4.m.1.cv2.conv.weight", "model.4.m.1.cv2.bn.weight", "model.4.m.1.cv2.bn.bias", "model.4.m.1.cv2.bn.running_mean", "model.4.m.1.cv2.bn.running_var", "model.4.m.1.cv2.bn.num_batches_tracked", "model.5.conv.weight", "model.5.bn.weight", "model.5.bn.bias", "model.5.bn.running_mean", "model.5.bn.running_var", "model.5.bn.num_batches_tracked", "model.6.cv1.conv.weight", "model.6.cv1.bn.weight", "model.6.cv1.bn.bias", "model.6.cv1.bn.running_mean", "model.6.cv1.bn.running_var", "model.6.cv1.bn.num_batches_tracked", "model.6.cv2.conv.weight", "model.6.cv2.bn.weight", "model.6.cv2.bn.bias", "model.6.cv2.bn.running_mean", "model.6.cv2.bn.running_var", "model.6.cv2.bn.num_batches_tracked", "model.6.cv3.conv.weight", "model.6.cv3.bn.weight", "model.6.cv3.bn.bias", "model.6.cv3.bn.running_mean", "model.6.cv3.bn.running_var", "model.6.cv3.bn.num_batches_tracked", "model.6.m.0.cv1.conv.weight", "model.6.m.0.cv1.bn.weight", "model.6.m.0.cv1.bn.bias", "model.6.m.0.cv1.bn.running_mean", "model.6.m.0.cv1.bn.running_var", 
"model.6.m.0.cv1.bn.num_batches_tracked", "model.6.m.0.cv2.conv.weight", "model.6.m.0.cv2.bn.weight", "model.6.m.0.cv2.bn.bias", "model.6.m.0.cv2.bn.running_mean", "model.6.m.0.cv2.bn.running_var", "model.6.m.0.cv2.bn.num_batches_tracked", "model.6.m.1.cv1.conv.weight", "model.6.m.1.cv1.bn.weight", "model.6.m.1.cv1.bn.bias", "model.6.m.1.cv1.bn.running_mean", "model.6.m.1.cv1.bn.running_var", "model.6.m.1.cv1.bn.num_batches_tracked", "model.6.m.1.cv2.conv.weight", "model.6.m.1.cv2.bn.weight", "model.6.m.1.cv2.bn.bias", "model.6.m.1.cv2.bn.running_mean", "model.6.m.1.cv2.bn.running_var", "model.6.m.1.cv2.bn.num_batches_tracked", "model.6.m.2.cv1.conv.weight", "model.6.m.2.cv1.bn.weight", "model.6.m.2.cv1.bn.bias", "model.6.m.2.cv1.bn.running_mean", "model.6.m.2.cv1.bn.running_var", "model.6.m.2.cv1.bn.num_batches_tracked", "model.6.m.2.cv2.conv.weight", "model.6.m.2.cv2.bn.weight", "model.6.m.2.cv2.bn.bias", "model.6.m.2.cv2.bn.running_mean", "model.6.m.2.cv2.bn.running_var", "model.6.m.2.cv2.bn.num_batches_tracked", "model.7.conv.weight", "model.7.bn.weight", "model.7.bn.bias", "model.7.bn.running_mean", "model.7.bn.running_var", "model.7.bn.num_batches_tracked", "model.8.cv1.conv.weight", "model.8.cv1.bn.weight", "model.8.cv1.bn.bias", "model.8.cv1.bn.running_mean", "model.8.cv1.bn.running_var", "model.8.cv1.bn.num_batches_tracked", "model.8.cv2.conv.weight", "model.8.cv2.bn.weight", "model.8.cv2.bn.bias", "model.8.cv2.bn.running_mean", "model.8.cv2.bn.running_var", "model.8.cv2.bn.num_batches_tracked", "model.8.cv3.conv.weight", "model.8.cv3.bn.weight", "model.8.cv3.bn.bias", "model.8.cv3.bn.running_mean", "model.8.cv3.bn.running_var", "model.8.cv3.bn.num_batches_tracked", "model.8.m.0.cv1.conv.weight", "model.8.m.0.cv1.bn.weight", "model.8.m.0.cv1.bn.bias", "model.8.m.0.cv1.bn.running_mean", "model.8.m.0.cv1.bn.running_var", "model.8.m.0.cv1.bn.num_batches_tracked", "model.8.m.0.cv2.conv.weight", "model.8.m.0.cv2.bn.weight", "model.8.m.0.cv2.bn.bias", "model.8.m.0.cv2.bn.running_mean", "model.8.m.0.cv2.bn.running_var", "model.8.m.0.cv2.bn.num_batches_tracked", "model.9.cv1.conv.weight", "model.9.cv1.bn.weight", "model.9.cv1.bn.bias", "model.9.cv1.bn.running_mean", "model.9.cv1.bn.running_var", "model.9.cv1.bn.num_batches_tracked", "model.9.cv2.conv.weight", "model.9.cv2.bn.weight", "model.9.cv2.bn.bias", "model.9.cv2.bn.running_mean", "model.9.cv2.bn.running_var", "model.9.cv2.bn.num_batches_tracked", "model.10.conv.weight", "model.10.bn.weight", "model.10.bn.bias", "model.10.bn.running_mean", "model.10.bn.running_var", "model.10.bn.num_batches_tracked", "model.13.cv1.conv.weight", "model.13.cv1.bn.weight", "model.13.cv1.bn.bias", "model.13.cv1.bn.running_mean", "model.13.cv1.bn.running_var", "model.13.cv1.bn.num_batches_tracked", "model.13.cv2.conv.weight", "model.13.cv2.bn.weight", "model.13.cv2.bn.bias", "model.13.cv2.bn.running_mean", "model.13.cv2.bn.running_var", "model.13.cv2.bn.num_batches_tracked", "model.13.cv3.conv.weight", "model.13.cv3.bn.weight", "model.13.cv3.bn.bias", "model.13.cv3.bn.running_mean", "model.13.cv3.bn.running_var", "model.13.cv3.bn.num_batches_tracked", "model.13.m.0.cv1.conv.weight", "model.13.m.0.cv1.bn.weight", "model.13.m.0.cv1.bn.bias", "model.13.m.0.cv1.bn.running_mean", "model.13.m.0.cv1.bn.running_var", "model.13.m.0.cv1.bn.num_batches_tracked", "model.13.m.0.cv2.conv.weight", "model.13.m.0.cv2.bn.weight", "model.13.m.0.cv2.bn.bias", "model.13.m.0.cv2.bn.running_mean", "model.13.m.0.cv2.bn.running_var", "model.13.m.0.cv2.bn.num_batches_tracked", 
"model.14.conv.weight", "model.14.bn.weight", "model.14.bn.bias", "model.14.bn.running_mean", "model.14.bn.running_var", "model.14.bn.num_batches_tracked", "model.17.cv1.conv.weight", "model.17.cv1.bn.weight", "model.17.cv1.bn.bias", "model.17.cv1.bn.running_mean", "model.17.cv1.bn.running_var", "model.17.cv1.bn.num_batches_tracked", "model.17.cv2.conv.weight", "model.17.cv2.bn.weight", "model.17.cv2.bn.bias", "model.17.cv2.bn.running_mean", "model.17.cv2.bn.running_var", "model.17.cv2.bn.num_batches_tracked", "model.17.cv3.conv.weight", "model.17.cv3.bn.weight", "model.17.cv3.bn.bias", "model.17.cv3.bn.running_mean", "model.17.cv3.bn.running_var", "model.17.cv3.bn.num_batches_tracked", "model.17.m.0.cv1.conv.weight", "model.17.m.0.cv1.bn.weight", "model.17.m.0.cv1.bn.bias", "model.17.m.0.cv1.bn.running_mean", "model.17.m.0.cv1.bn.running_var", "model.17.m.0.cv1.bn.num_batches_tracked", "model.17.m.0.cv2.conv.weight", "model.17.m.0.cv2.bn.weight", "model.17.m.0.cv2.bn.bias", "model.17.m.0.cv2.bn.running_mean", "model.17.m.0.cv2.bn.running_var", "model.17.m.0.cv2.bn.num_batches_tracked", "model.18.conv.weight", "model.18.bn.weight", "model.18.bn.bias", "model.18.bn.running_mean", "model.18.bn.running_var", "model.18.bn.num_batches_tracked", "model.20.cv1.conv.weight", "model.20.cv1.bn.weight", "model.20.cv1.bn.bias", "model.20.cv1.bn.running_mean", "model.20.cv1.bn.running_var", "model.20.cv1.bn.num_batches_tracked", "model.20.cv2.conv.weight", "model.20.cv2.bn.weight", "model.20.cv2.bn.bias", "model.20.cv2.bn.running_mean", "model.20.cv2.bn.running_var", "model.20.cv2.bn.num_batches_tracked", "model.20.cv3.conv.weight", "model.20.cv3.bn.weight", "model.20.cv3.bn.bias", "model.20.cv3.bn.running_mean", "model.20.cv3.bn.running_var", "model.20.cv3.bn.num_batches_tracked", "model.20.m.0.cv1.conv.weight", "model.20.m.0.cv1.bn.weight", "model.20.m.0.cv1.bn.bias", "model.20.m.0.cv1.bn.running_mean", "model.20.m.0.cv1.bn.running_var", "model.20.m.0.cv1.bn.num_batches_tracked", "model.20.m.0.cv2.conv.weight", "model.20.m.0.cv2.bn.weight", "model.20.m.0.cv2.bn.bias", "model.20.m.0.cv2.bn.running_mean", "model.20.m.0.cv2.bn.running_var", "model.20.m.0.cv2.bn.num_batches_tracked", "model.21.conv.weight", "model.21.bn.weight", "model.21.bn.bias", "model.21.bn.running_mean", "model.21.bn.running_var", "model.21.bn.num_batches_tracked", "model.23.cv1.conv.weight", "model.23.cv1.bn.weight", "model.23.cv1.bn.bias", "model.23.cv1.bn.running_mean", "model.23.cv1.bn.running_var", "model.23.cv1.bn.num_batches_tracked", "model.23.cv2.conv.weight", "model.23.cv2.bn.weight", "model.23.cv2.bn.bias", "model.23.cv2.bn.running_mean", "model.23.cv2.bn.running_var", "model.23.cv2.bn.num_batches_tracked", "model.23.cv3.conv.weight", "model.23.cv3.bn.weight", "model.23.cv3.bn.bias", "model.23.cv3.bn.running_mean", "model.23.cv3.bn.running_var", "model.23.cv3.bn.num_batches_tracked", "model.23.m.0.cv1.conv.weight", "model.23.m.0.cv1.bn.weight", "model.23.m.0.cv1.bn.bias", "model.23.m.0.cv1.bn.running_mean", "model.23.m.0.cv1.bn.running_var", "model.23.m.0.cv1.bn.num_batches_tracked", "model.23.m.0.cv2.conv.weight", "model.23.m.0.cv2.bn.weight", "model.23.m.0.cv2.bn.bias", "model.23.m.0.cv2.bn.running_mean", "model.23.m.0.cv2.bn.running_var", "model.23.m.0.cv2.bn.num_batches_tracked", "model.24.anchors", "model.24.m.0.weight", "model.24.m.0.bias", "model.24.m.1.weight", "model.24.m.1.bias", "model.24.m.2.weight", "model.24.m.2.bias".

Can someone tell me what I'm missing?

Something wrong when I run the export.py~

(py36) E:\pytorch\edgeai-yolov5-yolo-pose>python export.py --weights weights/Yolov5s6_pose_640_ti_lite.pt  --img 960 --batch 1 --simplify --export-nms
Namespace(batch_size=1, device='0', dynamic=False, export_nms=True, grid=False, img_size=[960, 960], simplify=True, weights='weights/Yolov5s6_pose_640_ti_lite.pt')
YOLOv5  2022-6-6 torch 1.8.1+cu101 CUDA:0 (NVIDIA GeForce GTX 950M, 2048.0MB)

Fusing layers...
Model Summary: 297 layers, 12349624 parameters, 0 gradients, 16.8 GFLOPS
Traceback (most recent call last):
  File "export.py", line 79, in <module>
    print(nms_export(y))
  File "D:\Anaconda\conda\envs\py36\Lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "E:\pytorch\edgeai-yolov5-yolo-pose\models\common.py", line 275, in forward
    return non_max_suppression_export(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes, kpt_label=self.kpt_label)
  File "E:\pytorch\edgeai-yolov5-yolo-pose\utils\general.py", line 606, in non_max_suppression_export
    conf, j = cls_conf.max(1, keepdim=True)
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

And this is the result of nms(y):
[tensor([], device='cuda:0', size=(0, 6))]
The other one is nms_export(y):
tensor([], device='cuda:0', size=(0, 0))
So maybe something went wrong~

onnx model inference results is poor!!!

❔Question

Hi @mathmanu @kumardesappan, thanks for your nice work. I met some issues when running inference with the model (yolov5.onnx) converted from yolov5.pt. What are the possible reasons? Thanks!

Additional context

1653029576(1)

![image](https://user-images.githubusercontent.com/30424943/169470534-0b64400d-6d6e-4524-bce5-87c21e6a1619.png) Looking forward to your answer. ^-^

TypeError: 'float' object is not subscriptable

Traceback (most recent call last):
File "detect.py", line 238, in
main(opt)
File "detect.py", line 233, in main
run(**vars(opt))
File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 141, in run
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
File "/opt/edgeai-yolov5/utils/general.py", line 471, in scale_coords
coords[:, [0, 2]] /= gain[1]
TypeError: 'float' object is not subscriptable

Can you help solve this error?
I'm not sure whether it can be solved this way in ./utils/general.py, replacing

coords[:, [0, 2]] /= gain[1]
coords[:, [1, 3]] /= gain[0]

with

coords[:, [0, 2]] /= gain
coords[:, [1, 3]] /= gain

About Yolo-pose modification

Hello, thank you very much for your code. I have a question: when I train a task with four keypoints, after modifying the code the test results on the training set are very poor, but I have checked the code and found no problem. What could be the cause? Looking forward to your answer.

When training the dataset, Nan appears

When I started training on the COCO2017 dataset it was normal at first, but then NaN appeared, which I thought might be caused by exploding gradients, although I didn't make any changes to the code. This question has been bothering me for a long time and I look forward to your suggestions.

[images]

Clarification on the Data format

❔Question

1.) I'm quite stuck on the exact format required for the dataset, as I have been getting this error: "AssertionError: train: No labels in coco_kpts/train2017.cache. Can not train without labels." My current format is:

  • coco_kpts
    • images
      - train
        image1.jpg ...
      - val
        image1.jpg ...
    • labels
      - train2017
        image1.txt ... (modelled exactly after the labels that are available for download)
      - val2017
        image1.txt ... (modelled exactly after the labels that are available for download)
    • train2017.txt (list of images, e.g. ./images/train/image1.jpg)
    • val2017.txt (list of images, e.g. ./images/val/image1.jpg)

Any chance you could release your coco_kpts directory? It would be very straightforward to replicate after that.

2.) I am a bit confused about the required pre-trained model. Are you just taking a YOLOv5 model trained to detect only people and then using it to train the keypoint model? It appears the pre-trained person-detector models you have provided have been modified in some capacity? I'm wondering because I would like to use yolo-pose on a different dataset (i.e. keypoints, but not on humans).

Additional context

error: run python export.py

Traceback (most recent call last):
  File "export.py", line 71, in <module>
    y = model(img)  # dry runs
  File "/home/dong/.conda/envs/yolo-pose/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dong/Code/yolo-pose/edgeai-yolov5-yolo-pose/models/yolo.py", line 157, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/dong/Code/yolo-pose/edgeai-yolov5-yolo-pose/models/yolo.py", line 188, in forward_once
    x = m(x)  # run
  File "/home/dong/.conda/envs/yolo-pose/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dong/.conda/envs/yolo-pose/lib/python3.8/site-packages/torch/nn/modules/upsampling.py", line 154, in forward
    recompute_scale_factor=self.recompute_scale_factor)
  File "/home/dong/.conda/envs/yolo-pose/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

error: run python yolo_pose_onnx_inference.py

Traceback (most recent call last):
File "yolo_pose_onnx_inference.py", line 123, in
main()
File "yolo_pose_onnx_inference.py", line 118, in main
model_inference_image_list(model_path=args.model_path, img_path=args.img_path,
File "yolo_pose_onnx_inference.py", line 70, in model_inference_image_list
post_process(img_file, dst_file, output[0], score_threshold=0.3)
File "yolo_pose_onnx_inference.py", line 89, in post_process
img = cv2.rectangle(img, (int(det_bbox[0]), int(det_bbox[1])), ((det_bbox[2]), (det_bbox[3])), color_map[::-1], 2)
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'rectangle'

Overload resolution failed:

  • Can't parse 'pt2'. Sequence item with index 0 has a wrong type
  • Can't parse 'pt2'. Sequence item with index 0 has a wrong type
  • Can't parse 'rec'. Expected sequence length 4, got 2
  • Can't parse 'rec'. Expected sequence length 4, got 2

Deploying YoloPose with TensorRT

❔Question

I tried deploying YoloPose on an Nvidia Jetson Xavier NX using TensorRT, but I get a very low frame rate; the average processing time per frame is about 160 ms. Can you give me some advice or guidance on performance optimization? Thank you.

Additional context

I used trtexec for the ONNX-to-TensorRT model conversion and the output is as follows. The model seems to have a fairly long inference time:
微信截图_20220615100249

Processing time including pre-processing, inference and post-processing
微信截图_20220614183404

My operating Environment
微信截图_20220614183603
微信截图_20220615100411

Here is the model I used
yolov5s6_pose_640_ti_lite.zip

Export onnx model settings: export_nms=true

I am not sure about this part of the export.py file:
nms = NMS(conf=0.001)
nms_export = NMS_Export(conf=0.001)
y_export = nms_export(y)
y = nms(y)
assert (torch.sum(torch.abs(y_export[0]-y[0]))<1e-6)
What should I do if y_export is inconsistent with y?
Could you help check this case? Thanks.

Problem of decreasing accuracy during model quantization (yolov5.onnx --> yolov5.bin)

❔Question

Hi @mathmanu @kumardesappan, thanks for your nice work! I encountered some problems when converting the model (yolov5s.onnx --> yolov5s.bin, input size 576x960).
I use tidl_model_import.out (version 8.0) to do this. Before quantization of the model, the position of the target box is very accurate (as shown in picture 1 below). After quantization, the position of the visualized target box becomes worse (as shown in picture 2 below). I would like to ask for possible reasons. Thanks a lot!

Additional context

[Running TIDL in PC emulation mode to collect Activations range for each layer]
picture 1: [Running TIDL in PC emulation mode to collect Activations range for each layer]
image
[*************** Calibration iteration number 0 completed **********************]
picture 2 :
image

Convert TXT screenshot:
image
Looking forward to your reply. ^-^

Is the test.py file deleted?

❔Question

Additional context

I want to try to test the model accuracy, but I can't find the test.py mentioned in the readme

About retraining the model on the COCO dataset: there is a problem with the predictions after training

❔Question

Additional context

I tried fine-tuning the yolov5s6_pose_640_ti_lite weights on the COCO dataset. I added a larger rotation augmentation to the data, with the rotation angle between [-180, 180], and made no other adjustments. I found that after training the model seems to be unfriendly to the left eye and left ear: their keypoints always shift a lot. The same problem also appears in the inference pictures after training. What is this problem? Has the author encountered it? Looking forward to your reply.

Converting yolov5s6_640_ti_lite_37p4_56p0.onnx got many unsupported layers

❔Question

When I convert the downloaded yolov5s6_640_ti_lite_37p4_56p0.onnx, I get many unsupported layers and ops.
But judging from your converted artifacts (od-8100_onnxrt_weights_yolov5s6_640_ti_lite_37p4_56p0_onnx), seemingly all the layers are supported by TIDL. Why?
Besides, I think my TIDL version is up to date().

Additional context

tidl_tensor_bits = 8
debug_level = 3
num_tidl_subgraphs = 16
tidl_denylist =
tidl_calibration_accuracy_level = 64
tidl_calibration_options:num_frames_calibration = 1
tidl_calibration_options:bias_calibration_iterations = 3
power_of_2_quantization = 2
enable_high_resolution_optimization = 0
pre_batchnorm_fold = 1
add_data_convert_ops = 0
output_feature_16bit_names_list =
m_params_16bit_names_list =
reserved_compile_constraints_flag = 1601
ti_internal_reserved_1 =
Parsing ONNX Model
model_proto 0x7fffb5d3f9f0

WARNING : 'meta_layers_names_list' is not provided - running OD post processing in ARM mode

TIDL Meta PipeLine (Proto) File :

Number of OD backbone nodes = 0
Size of odBackboneNodeIds = 0
Layer 0 -- layer name -- Conv_0
Input dims size = 4 dims --- 1 3 640 640
Supported TIDL layer type --- Conv -- Conv_0
Layer 1 -- layer name -- Relu_1
Input dims size = 4 dims --- 1 12 320 320
Supported TIDL layer type --- Relu -- Relu_1
Layer 2 -- layer name -- Conv_2
Input dims size = 4 dims --- 1 12 320 320
Supported TIDL layer type --- Conv -- Conv_2
Layer 3 -- layer name -- Relu_3
Input dims size = 4 dims --- 1 32 320 320
Supported TIDL layer type --- Relu -- Relu_3
Layer 4 -- layer name -- Conv_4
Input dims size = 4 dims --- 1 32 320 320
Supported TIDL layer type --- Conv -- Conv_4
Layer 5 -- layer name -- Relu_5
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Relu -- Relu_5
Layer 6 -- layer name -- Conv_13
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Conv -- Conv_13
Layer 7 -- layer name -- Relu_14
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Relu -- Relu_14
Layer 8 -- layer name -- Conv_6
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Conv -- Conv_6
Layer 9 -- layer name -- Relu_7
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Relu -- Relu_7
Layer 10 -- layer name -- Conv_8
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Conv -- Conv_8
Layer 11 -- layer name -- Relu_9
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Relu -- Relu_9
Layer 12 -- layer name -- Conv_10
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Conv -- Conv_10
Layer 13 -- layer name -- Relu_11
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Relu -- Relu_11
Layer 14 -- layer name -- Add_12
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Add -- Add_12
Layer 15 -- layer name -- Concat_15
Input dims size = 4 dims --- 1 32 160 160
Supported TIDL layer type --- Concat -- Concat_15
Layer 16 -- layer name -- Conv_16
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Conv -- Conv_16
Layer 17 -- layer name -- Relu_17
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Relu -- Relu_17
Layer 18 -- layer name -- Conv_18
Input dims size = 4 dims --- 1 64 160 160
Supported TIDL layer type --- Conv -- Conv_18
Layer 19 -- layer name -- Relu_19
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Relu -- Relu_19
Layer 20 -- layer name -- Conv_37
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_37
Layer 21 -- layer name -- Relu_38
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_38
Layer 22 -- layer name -- Conv_20
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_20
Layer 23 -- layer name -- Relu_21
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_21
Layer 24 -- layer name -- Conv_22
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_22
Layer 25 -- layer name -- Relu_23
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_23
Layer 26 -- layer name -- Conv_24
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_24
Layer 27 -- layer name -- Relu_25
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_25
Layer 28 -- layer name -- Add_26
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Add -- Add_26
Layer 29 -- layer name -- Conv_27
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_27
Layer 30 -- layer name -- Relu_28
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_28
Layer 31 -- layer name -- Conv_29
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_29
Layer 32 -- layer name -- Relu_30
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_30
Layer 33 -- layer name -- Add_31
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Add -- Add_31
Layer 34 -- layer name -- Conv_32
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_32
Layer 35 -- layer name -- Relu_33
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_33
Layer 36 -- layer name -- Conv_34
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_34
Layer 37 -- layer name -- Relu_35
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_35
Layer 38 -- layer name -- Add_36
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Add -- Add_36
Layer 39 -- layer name -- Concat_39
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Concat -- Concat_39
Layer 40 -- layer name -- Conv_40
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_40
Layer 41 -- layer name -- Relu_41
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Relu -- Relu_41
Layer 42 -- layer name -- Conv_42
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_42
Layer 43 -- layer name -- Relu_43
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Relu -- Relu_43
Layer 44 -- layer name -- Conv_61
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_61
Layer 45 -- layer name -- Relu_62
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_62
Layer 46 -- layer name -- Conv_44
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_44
Layer 47 -- layer name -- Relu_45
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_45
Layer 48 -- layer name -- Conv_46
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_46
Layer 49 -- layer name -- Relu_47
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_47
Layer 50 -- layer name -- Conv_48
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_48
Layer 51 -- layer name -- Relu_49
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_49
Layer 52 -- layer name -- Add_50
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Add -- Add_50
Layer 53 -- layer name -- Conv_51
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_51
Layer 54 -- layer name -- Relu_52
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_52
Layer 55 -- layer name -- Conv_53
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_53
Layer 56 -- layer name -- Relu_54
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_54
Layer 57 -- layer name -- Add_55
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Add -- Add_55
Layer 58 -- layer name -- Conv_56
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_56
Layer 59 -- layer name -- Relu_57
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_57
Layer 60 -- layer name -- Conv_58
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_58
Layer 61 -- layer name -- Relu_59
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_59
Layer 62 -- layer name -- Add_60
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Add -- Add_60
Layer 63 -- layer name -- Concat_63
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Concat -- Concat_63
Layer 64 -- layer name -- Conv_64
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_64
Layer 65 -- layer name -- Relu_65
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Relu -- Relu_65
Layer 66 -- layer name -- Conv_66
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_66
Layer 67 -- layer name -- Relu_67
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Relu -- Relu_67
Layer 68 -- layer name -- Conv_75
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_75
Layer 69 -- layer name -- Relu_76
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_76
Layer 70 -- layer name -- Conv_68
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_68
Layer 71 -- layer name -- Relu_69
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_69
Layer 72 -- layer name -- Conv_70
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_70
Layer 73 -- layer name -- Relu_71
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_71
Layer 74 -- layer name -- Conv_72
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_72
Layer 75 -- layer name -- Relu_73
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_73
Layer 76 -- layer name -- Add_74
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Add -- Add_74
Layer 77 -- layer name -- Concat_77
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Concat -- Concat_77
Layer 78 -- layer name -- Conv_78
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_78
Layer 79 -- layer name -- Relu_79
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Relu -- Relu_79
Layer 80 -- layer name -- Conv_80
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_80
Layer 81 -- layer name -- Relu_81
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Relu -- Relu_81
Layer 82 -- layer name -- Conv_82
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_82
Layer 83 -- layer name -- Relu_83
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_83
Layer 84 -- layer name -- MaxPool_87
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- MaxPool -- MaxPool_87
Layer 85 -- layer name -- MaxPool_88
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- MaxPool -- MaxPool_88
Layer 86 -- layer name -- MaxPool_89
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- MaxPool -- MaxPool_89
Layer 87 -- layer name -- Concat_90
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Concat -- Concat_90
Layer 88 -- layer name -- Conv_91
Input dims size = 4 dims --- 1 1024 10 10
Supported TIDL layer type --- Conv -- Conv_91
Layer 89 -- layer name -- Relu_92
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Relu -- Relu_92
Layer 90 -- layer name -- Conv_99
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_99
Layer 91 -- layer name -- Relu_100
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_100
Layer 92 -- layer name -- Conv_93
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_93
Layer 93 -- layer name -- Relu_94
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_94
Layer 94 -- layer name -- Conv_95
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Conv -- Conv_95
Layer 95 -- layer name -- Relu_96
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_96
Layer 96 -- layer name -- Conv_97
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Conv -- Conv_97
Layer 97 -- layer name -- Relu_98
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_98
Layer 98 -- layer name -- Concat_101
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Concat -- Concat_101
Layer 99 -- layer name -- Conv_102
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_102
Layer 100 -- layer name -- Relu_103
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Relu -- Relu_103
Layer 101 -- layer name -- Conv_104
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_104
Layer 102 -- layer name -- Relu_105
Input dims size = 4 dims --- 1 384 10 10
Supported TIDL layer type --- Relu -- Relu_105
Layer 103 -- layer name -- Resize_107
Input dims size = 4 dims --- 1 384 10 10
Supported TIDL layer type --- Resize -- Resize_107
Layer 104 -- layer name -- Concat_108
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Concat -- Concat_108
Layer 105 -- layer name -- Conv_115
Input dims size = 4 dims --- 1 768 20 20
Supported TIDL layer type --- Conv -- Conv_115
Layer 106 -- layer name -- Relu_116
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_116
Layer 107 -- layer name -- Conv_109
Input dims size = 4 dims --- 1 768 20 20
Supported TIDL layer type --- Conv -- Conv_109
Layer 108 -- layer name -- Relu_110
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_110
Layer 109 -- layer name -- Conv_111
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_111
Layer 110 -- layer name -- Relu_112
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_112
Layer 111 -- layer name -- Conv_113
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_113
Layer 112 -- layer name -- Relu_114
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_114
Layer 113 -- layer name -- Concat_117
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Concat -- Concat_117
Layer 114 -- layer name -- Conv_118
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_118
Layer 115 -- layer name -- Relu_119
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Relu -- Relu_119
Layer 116 -- layer name -- Conv_120
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_120
Layer 117 -- layer name -- Relu_121
Input dims size = 4 dims --- 1 256 20 20
Supported TIDL layer type --- Relu -- Relu_121
Layer 118 -- layer name -- Resize_123
Input dims size = 4 dims --- 1 256 20 20
Supported TIDL layer type --- Resize -- Resize_123
Layer 119 -- layer name -- Concat_124
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Concat -- Concat_124
Layer 120 -- layer name -- Conv_131
Input dims size = 4 dims --- 1 512 40 40
Supported TIDL layer type --- Conv -- Conv_131
Layer 121 -- layer name -- Relu_132
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_132
Layer 122 -- layer name -- Conv_125
Input dims size = 4 dims --- 1 512 40 40
Supported TIDL layer type --- Conv -- Conv_125
Layer 123 -- layer name -- Relu_126
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_126
Layer 124 -- layer name -- Conv_127
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_127
Layer 125 -- layer name -- Relu_128
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_128
Layer 126 -- layer name -- Conv_129
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_129
Layer 127 -- layer name -- Relu_130
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_130
Layer 128 -- layer name -- Concat_133
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Concat -- Concat_133
Layer 129 -- layer name -- Conv_134
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_134
Layer 130 -- layer name -- Relu_135
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Relu -- Relu_135
Layer 131 -- layer name -- Conv_136
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_136
Layer 132 -- layer name -- Relu_137
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_137
Layer 133 -- layer name -- Resize_139
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Resize -- Resize_139
Layer 134 -- layer name -- Concat_140
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Concat -- Concat_140
Layer 135 -- layer name -- Conv_147
Input dims size = 4 dims --- 1 256 80 80
Supported TIDL layer type --- Conv -- Conv_147
Layer 136 -- layer name -- Relu_148
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_148
Layer 137 -- layer name -- Conv_141
Input dims size = 4 dims --- 1 256 80 80
Supported TIDL layer type --- Conv -- Conv_141
Layer 138 -- layer name -- Relu_142
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_142
Layer 139 -- layer name -- Conv_143
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_143
Layer 140 -- layer name -- Relu_144
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_144
Layer 141 -- layer name -- Conv_145
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Conv -- Conv_145
Layer 142 -- layer name -- Relu_146
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Relu -- Relu_146
Layer 143 -- layer name -- Concat_149
Input dims size = 4 dims --- 1 64 80 80
Supported TIDL layer type --- Concat -- Concat_149
Layer 144 -- layer name -- Conv_150
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_150
Layer 145 -- layer name -- Relu_151
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Relu -- Relu_151
Layer 146 -- layer name -- Conv_152
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_152
Layer 147 -- layer name -- Relu_153
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_153
Layer 148 -- layer name -- Concat_154
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Concat -- Concat_154
Layer 149 -- layer name -- Conv_161
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_161
Layer 150 -- layer name -- Relu_162
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_162
Layer 151 -- layer name -- Conv_155
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_155
Layer 152 -- layer name -- Relu_156
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_156
Layer 153 -- layer name -- Conv_157
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_157
Layer 154 -- layer name -- Relu_158
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_158
Layer 155 -- layer name -- Conv_159
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Conv -- Conv_159
Layer 156 -- layer name -- Relu_160
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Relu -- Relu_160
Layer 157 -- layer name -- Concat_163
Input dims size = 4 dims --- 1 128 40 40
Supported TIDL layer type --- Concat -- Concat_163
Layer 158 -- layer name -- Conv_164
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_164
Layer 159 -- layer name -- Relu_165
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Relu -- Relu_165
Layer 160 -- layer name -- Conv_166
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_166
Layer 161 -- layer name -- Relu_167
Input dims size = 4 dims --- 1 256 20 20
Supported TIDL layer type --- Relu -- Relu_167
Layer 162 -- layer name -- Concat_168
Input dims size = 4 dims --- 1 256 20 20
Supported TIDL layer type --- Concat -- Concat_168
Layer 163 -- layer name -- Conv_175
Input dims size = 4 dims --- 1 512 20 20
Supported TIDL layer type --- Conv -- Conv_175
Layer 164 -- layer name -- Relu_176
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_176
Layer 165 -- layer name -- Conv_169
Input dims size = 4 dims --- 1 512 20 20
Supported TIDL layer type --- Conv -- Conv_169
Layer 166 -- layer name -- Relu_170
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_170
Layer 167 -- layer name -- Conv_171
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_171
Layer 168 -- layer name -- Relu_172
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_172
Layer 169 -- layer name -- Conv_173
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Conv -- Conv_173
Layer 170 -- layer name -- Relu_174
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Relu -- Relu_174
Layer 171 -- layer name -- Concat_177
Input dims size = 4 dims --- 1 192 20 20
Supported TIDL layer type --- Concat -- Concat_177
Layer 172 -- layer name -- Conv_178
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_178
Layer 173 -- layer name -- Relu_179
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Relu -- Relu_179
Layer 174 -- layer name -- Conv_180
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_180
Layer 175 -- layer name -- Relu_181
Input dims size = 4 dims --- 1 384 10 10
Supported TIDL layer type --- Relu -- Relu_181
Layer 176 -- layer name -- Concat_182
Input dims size = 4 dims --- 1 384 10 10
Supported TIDL layer type --- Concat -- Concat_182
Layer 177 -- layer name -- Conv_189
Input dims size = 4 dims --- 1 768 10 10
Supported TIDL layer type --- Conv -- Conv_189
Layer 178 -- layer name -- Relu_190
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_190
Layer 179 -- layer name -- Conv_183
Input dims size = 4 dims --- 1 768 10 10
Supported TIDL layer type --- Conv -- Conv_183
Layer 180 -- layer name -- Relu_184
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_184
Layer 181 -- layer name -- Conv_185
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Conv -- Conv_185
Layer 182 -- layer name -- Relu_186
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_186
Layer 183 -- layer name -- Conv_187
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Conv -- Conv_187
Layer 184 -- layer name -- Relu_188
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Relu -- Relu_188
Layer 185 -- layer name -- Concat_191
Input dims size = 4 dims --- 1 256 10 10
Supported TIDL layer type --- Concat -- Concat_191
Layer 186 -- layer name -- Conv_192
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_192
Layer 187 -- layer name -- Relu_193
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Relu -- Relu_193
Layer 188 -- layer name -- Conv_1082
Input dims size = 4 dims --- 1 512 10 10
Supported TIDL layer type --- Conv -- Conv_1082
Layer 189 -- layer name -- Reshape_1096
Input dims size = 4 dims --- 1 255 10 10
Unsupported (TIDL check) TIDL layer type --- Reshape
Layer 190 -- layer name -- Transpose_1097
Input dims size = 5 dims --- 1 3 85 10 10
Layer 190 --- op type - Transpose, Number of input dims 5 != 4 .. not supported by TIDL
Layer 191 -- layer name -- Sigmoid_1098
Input dims size = 5 dims --- 1 3 10 10 85
Layer 191 --- op type - Sigmoid, Number of input dims 5 != 4 .. not supported by TIDL
Layer 192 -- layer name -- Slice_1103
Input dims size = 5 dims --- 1 3 10 10 85
Layer 192 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 193 -- layer name -- Mul_1105
Input dims size = 5 dims --- 1 3 10 10 2
Layer 193 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 194 -- layer name -- Sub_1107
Input dims size = 5 dims --- 1 3 10 10 2
Layer 194 --- op type - Sub, Number of input dims 5 != 4 .. not supported by TIDL
Layer 195 -- layer name -- Add_1109
Input dims size = 5 dims --- 1 3 10 10 2
Layer 195 --- op type - Add, Number of input dims 5 != 4 .. not supported by TIDL
Layer 196 -- layer name -- Mul_1111
Input dims size = 5 dims --- 1 3 10 10 2
Layer 196 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 197 -- layer name -- Reshape_1118
Input dims size = 5 dims --- 1 3 10 10 2
Layer 197 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 198 -- layer name -- Expand_1127
Input dims size = 4 dims --- 3 10 10 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 199 -- layer name -- Reshape_1237
Input dims size = 5 dims --- 1 3 10 10 2
Layer 199 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 200 -- layer name -- ScatterND_1238
Input dims size = 5 dims --- 1 3 10 10 85
Layer 200 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 201 -- layer name -- Slice_1243
Input dims size = 5 dims --- 1 3 10 10 85
Layer 201 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 202 -- layer name -- Mul_1245
Input dims size = 5 dims --- 1 3 10 10 2
Layer 202 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 203 -- layer name -- Pow_1246
Input dims size = 5 dims --- 1 3 10 10 2
Layer 203 --- op type - Pow, Number of input dims 5 != 4 .. not supported by TIDL
Layer 204 -- layer name -- Mul_1247
Input dims size = 5 dims --- 1 3 10 10 2
Layer 204 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 205 -- layer name -- Reshape_1254
Input dims size = 6 dims --- 1 1 3 10 10 2
Layer 205 --- op type - Reshape, Number of input dims 6 != 4 .. not supported by TIDL
Layer 206 -- layer name -- Expand_1263
Input dims size = 4 dims --- 3 10 10 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 207 -- layer name -- Reshape_1373
Input dims size = 5 dims --- 1 3 10 10 2
Layer 207 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 208 -- layer name -- ScatterND_1374
Input dims size = 5 dims --- 1 3 10 10 85
Layer 208 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 209 -- layer name -- Reshape_1377
Input dims size = 5 dims --- 1 3 10 10 85
Layer 209 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 210 -- layer name -- Conv_786
Input dims size = 4 dims --- 1 384 20 20
Supported TIDL layer type --- Conv -- Conv_786
Layer 211 -- layer name -- Reshape_800
Input dims size = 4 dims --- 1 255 20 20
Unsupported (TIDL check) TIDL layer type --- Reshape
Layer 212 -- layer name -- Transpose_801
Input dims size = 5 dims --- 1 3 85 20 20
Layer 212 --- op type - Transpose, Number of input dims 5 != 4 .. not supported by TIDL
Layer 213 -- layer name -- Sigmoid_802
Input dims size = 5 dims --- 1 3 20 20 85
Layer 213 --- op type - Sigmoid, Number of input dims 5 != 4 .. not supported by TIDL
Layer 214 -- layer name -- Slice_807
Input dims size = 5 dims --- 1 3 20 20 85
Layer 214 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 215 -- layer name -- Mul_809
Input dims size = 5 dims --- 1 3 20 20 2
Layer 215 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 216 -- layer name -- Sub_811
Input dims size = 5 dims --- 1 3 20 20 2
Layer 216 --- op type - Sub, Number of input dims 5 != 4 .. not supported by TIDL
Layer 217 -- layer name -- Add_813
Input dims size = 5 dims --- 1 3 20 20 2
Layer 217 --- op type - Add, Number of input dims 5 != 4 .. not supported by TIDL
Layer 218 -- layer name -- Mul_815
Input dims size = 5 dims --- 1 3 20 20 2
Layer 218 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 219 -- layer name -- Reshape_822
Input dims size = 5 dims --- 1 3 20 20 2
Layer 219 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 220 -- layer name -- Expand_831
Input dims size = 4 dims --- 3 20 20 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 221 -- layer name -- Reshape_941
Input dims size = 5 dims --- 1 3 20 20 2
Layer 221 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 222 -- layer name -- ScatterND_942
Input dims size = 5 dims --- 1 3 20 20 85
Layer 222 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 223 -- layer name -- Slice_947
Input dims size = 5 dims --- 1 3 20 20 85
Layer 223 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 224 -- layer name -- Mul_949
Input dims size = 5 dims --- 1 3 20 20 2
Layer 224 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 225 -- layer name -- Pow_950
Input dims size = 5 dims --- 1 3 20 20 2
Layer 225 --- op type - Pow, Number of input dims 5 != 4 .. not supported by TIDL
Layer 226 -- layer name -- Mul_951
Input dims size = 5 dims --- 1 3 20 20 2
Layer 226 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 227 -- layer name -- Reshape_958
Input dims size = 6 dims --- 1 1 3 20 20 2
Layer 227 --- op type - Reshape, Number of input dims 6 != 4 .. not supported by TIDL
Layer 228 -- layer name -- Expand_967
Input dims size = 4 dims --- 3 20 20 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 229 -- layer name -- Reshape_1077
Input dims size = 5 dims --- 1 3 20 20 2
Layer 229 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 230 -- layer name -- ScatterND_1078
Input dims size = 5 dims --- 1 3 20 20 85
Layer 230 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 231 -- layer name -- Reshape_1081
Input dims size = 5 dims --- 1 3 20 20 85
Layer 231 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 232 -- layer name -- Conv_490
Input dims size = 4 dims --- 1 256 40 40
Supported TIDL layer type --- Conv -- Conv_490
Layer 233 -- layer name -- Reshape_504
Input dims size = 4 dims --- 1 255 40 40
Unsupported (TIDL check) TIDL layer type --- Reshape
Layer 234 -- layer name -- Transpose_505
Input dims size = 5 dims --- 1 3 85 40 40
Layer 234 --- op type - Transpose, Number of input dims 5 != 4 .. not supported by TIDL
Layer 235 -- layer name -- Sigmoid_506
Input dims size = 5 dims --- 1 3 40 40 85
Layer 235 --- op type - Sigmoid, Number of input dims 5 != 4 .. not supported by TIDL
Layer 236 -- layer name -- Slice_511
Input dims size = 5 dims --- 1 3 40 40 85
Layer 236 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 237 -- layer name -- Mul_513
Input dims size = 5 dims --- 1 3 40 40 2
Layer 237 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 238 -- layer name -- Sub_515
Input dims size = 5 dims --- 1 3 40 40 2
Layer 238 --- op type - Sub, Number of input dims 5 != 4 .. not supported by TIDL
Layer 239 -- layer name -- Add_517
Input dims size = 5 dims --- 1 3 40 40 2
Layer 239 --- op type - Add, Number of input dims 5 != 4 .. not supported by TIDL
Layer 240 -- layer name -- Mul_519
Input dims size = 5 dims --- 1 3 40 40 2
Layer 240 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 241 -- layer name -- Reshape_526
Input dims size = 5 dims --- 1 3 40 40 2
Layer 241 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 242 -- layer name -- Expand_535
Input dims size = 4 dims --- 3 40 40 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 243 -- layer name -- Reshape_645
Input dims size = 5 dims --- 1 3 40 40 2
Layer 243 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 244 -- layer name -- ScatterND_646
Input dims size = 5 dims --- 1 3 40 40 85
Layer 244 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 245 -- layer name -- Slice_651
Input dims size = 5 dims --- 1 3 40 40 85
Layer 245 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 246 -- layer name -- Mul_653
Input dims size = 5 dims --- 1 3 40 40 2
Layer 246 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 247 -- layer name -- Pow_654
Input dims size = 5 dims --- 1 3 40 40 2
Layer 247 --- op type - Pow, Number of input dims 5 != 4 .. not supported by TIDL
Layer 248 -- layer name -- Mul_655
Input dims size = 5 dims --- 1 3 40 40 2
Layer 248 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 249 -- layer name -- Reshape_662
Input dims size = 6 dims --- 1 1 3 40 40 2
Layer 249 --- op type - Reshape, Number of input dims 6 != 4 .. not supported by TIDL
Layer 250 -- layer name -- Expand_671
Input dims size = 4 dims --- 3 40 40 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 251 -- layer name -- Reshape_781
Input dims size = 5 dims --- 1 3 40 40 2
Layer 251 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 252 -- layer name -- ScatterND_782
Input dims size = 5 dims --- 1 3 40 40 85
Layer 252 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 253 -- layer name -- Reshape_785
Input dims size = 5 dims --- 1 3 40 40 85
Layer 253 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 254 -- layer name -- Conv_194
Input dims size = 4 dims --- 1 128 80 80
Supported TIDL layer type --- Conv -- Conv_194
Layer 255 -- layer name -- Reshape_208
Input dims size = 4 dims --- 1 255 80 80
Unsupported (TIDL check) TIDL layer type --- Reshape
Layer 256 -- layer name -- Transpose_209
Input dims size = 5 dims --- 1 3 85 80 80
Layer 256 --- op type - Transpose, Number of input dims 5 != 4 .. not supported by TIDL
Layer 257 -- layer name -- Sigmoid_210
Input dims size = 5 dims --- 1 3 80 80 85
Layer 257 --- op type - Sigmoid, Number of input dims 5 != 4 .. not supported by TIDL
Layer 258 -- layer name -- Slice_215
Input dims size = 5 dims --- 1 3 80 80 85
Layer 258 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 259 -- layer name -- Mul_217
Input dims size = 5 dims --- 1 3 80 80 2
Layer 259 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 260 -- layer name -- Sub_219
Input dims size = 5 dims --- 1 3 80 80 2
Layer 260 --- op type - Sub, Number of input dims 5 != 4 .. not supported by TIDL
Layer 261 -- layer name -- Add_221
Input dims size = 5 dims --- 1 3 80 80 2
Layer 261 --- op type - Add, Number of input dims 5 != 4 .. not supported by TIDL
Layer 262 -- layer name -- Mul_223
Input dims size = 5 dims --- 1 3 80 80 2
Layer 262 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 263 -- layer name -- Reshape_230
Input dims size = 5 dims --- 1 3 80 80 2
Layer 263 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 264 -- layer name -- Expand_239
Input dims size = 4 dims --- 3 80 80 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 265 -- layer name -- Reshape_349
Input dims size = 5 dims --- 1 3 80 80 2
Layer 265 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 266 -- layer name -- ScatterND_350
Input dims size = 5 dims --- 1 3 80 80 85
Layer 266 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 267 -- layer name -- Slice_355
Input dims size = 5 dims --- 1 3 80 80 85
Layer 267 --- op type - Slice, Number of input dims 5 != 4 .. not supported by TIDL
Layer 268 -- layer name -- Mul_357
Input dims size = 5 dims --- 1 3 80 80 2
Layer 268 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 269 -- layer name -- Pow_358
Input dims size = 5 dims --- 1 3 80 80 2
Layer 269 --- op type - Pow, Number of input dims 5 != 4 .. not supported by TIDL
Layer 270 -- layer name -- Mul_359
Input dims size = 5 dims --- 1 3 80 80 2
Layer 270 --- op type - Mul, Number of input dims 5 != 4 .. not supported by TIDL
Layer 271 -- layer name -- Reshape_366
Input dims size = 6 dims --- 1 1 3 80 80 2
Layer 271 --- op type - Reshape, Number of input dims 6 != 4 .. not supported by TIDL
Layer 272 -- layer name -- Expand_375
Input dims size = 4 dims --- 3 80 80 2
Unsupported (import) TIDL layer type --- 0 op type --- Expand
Layer 273 -- layer name -- Reshape_485
Input dims size = 5 dims --- 1 3 80 80 2
Layer 273 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 274 -- layer name -- ScatterND_486
Input dims size = 5 dims --- 1 3 80 80 85
Layer 274 --- op type - ScatterND, Number of input dims 5 != 4 .. not supported by TIDL
Layer 275 -- layer name -- Reshape_489
Input dims size = 5 dims --- 1 3 80 80 85
Layer 275 --- op type - Reshape, Number of input dims 5 != 4 .. not supported by TIDL
Layer 276 -- layer name -- Concat_1378
Input dims size = 3 dims --- 1 19200 85
Layer 276 --- op type - Concat, Number of input dims 3 != 4 .. not supported by TIDL
Layer 277 -- layer name -- Gather_1380
Input dims size = 3 dims --- 1 25500 85
Layer 277 --- op type - Gather, Number of input dims 3 != 4 .. not supported by TIDL
Layer 278 -- layer name -- Cast_1381
Input dims size = 2 dims --- 1 25500
Layer 278 --- op type - Cast, Number of input dims 2 != 4 .. not supported by TIDL
Layer 279 -- layer name -- Greater_1383
Input dims size = 2 dims --- 1 25500
Layer 279 --- op type - Greater, Number of input dims 2 != 4 .. not supported by TIDL
Layer 280 -- layer name -- Gather_1387
Input dims size = 2 dims --- 1 25500
Layer 280 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
Layer 281 -- layer name -- NonZero_1389
Input dims size = 1 dims --- 25500
Layer 281 --- op type - NonZero, Number of input dims 1 != 4 .. not supported by TIDL
Layer 282 -- layer name -- Transpose_1390
Input dims size = 2 dims --- 1 0
Layer 282 --- op type - Transpose, Number of input dims 2 != 4 .. not supported by TIDL
Layer 283 -- layer name -- Squeeze_1391
Input dims size = 2 dims --- 0 1
Layer 283 --- op type - Squeeze, Number of input dims 2 != 4 .. not supported by TIDL
Layer 284 -- layer name -- Split_1384
Input dims size = 3 dims --- 1 25500 85
Layer 284 --- op type - Split, Number of input dims 3 != 4 .. not supported by TIDL
Layer 285 -- layer name -- Squeeze_1385
Input dims size = 3 dims --- 1 25500 85
Layer 285 --- op type - Squeeze, Number of input dims 3 != 4 .. not supported by TIDL
Layer 286 -- layer name -- Gather_1392
Input dims size = 2 dims --- 25500 85
Layer 286 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
Layer 287 -- layer name -- Slice_1422
Input dims size = 2 dims --- 0 85
Layer 287 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 288 -- layer name -- Slice_1417
Input dims size = 2 dims --- 0 85
Layer 288 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 289 -- layer name -- Mul_1423
Input dims size = 2 dims --- 0 1
Layer 289 --- op type - Mul, Number of input dims 2 != 4 .. not supported by TIDL
Layer 290 -- layer name -- ReduceMax_1431
Input dims size = 2 dims --- 0 80
Layer 290 --- op type - ReduceMax, Number of input dims 2 != 4 .. not supported by TIDL
Layer 291 -- layer name -- Reshape_1436
Input dims size = 2 dims --- 0 1
Layer 291 --- op type - Reshape, Number of input dims 2 != 4 .. not supported by TIDL
Layer 292 -- layer name -- Cast_1437
Input dims size = 1 dims --- 0
Layer 292 --- op type - Cast, Number of input dims 1 != 4 .. not supported by TIDL
Layer 293 -- layer name -- Greater_1439
Input dims size = 1 dims --- 0
Layer 293 --- op type - Greater, Number of input dims 1 != 4 .. not supported by TIDL
Layer 294 -- layer name -- NonZero_1441
Input dims size = 1 dims --- 0
Layer 294 --- op type - NonZero, Number of input dims 1 != 4 .. not supported by TIDL
Layer 295 -- layer name -- Transpose_1442
Input dims size = 2 dims --- 1 0
Layer 295 --- op type - Transpose, Number of input dims 2 != 4 .. not supported by TIDL
Layer 296 -- layer name -- Squeeze_1443
Input dims size = 2 dims --- 0 1
Layer 296 --- op type - Squeeze, Number of input dims 2 != 4 .. not supported by TIDL
Layer 297 -- layer name -- ArgMax_1432
Input dims size = 2 dims --- 0 80
Layer 297 --- op type - ArgMax, Number of input dims 2 != 4 .. not supported by TIDL
Layer 298 -- layer name -- Cast_1433
Input dims size = 2 dims --- 0 1
Layer 298 --- op type - Cast, Number of input dims 2 != 4 .. not supported by TIDL
Layer 299 -- layer name -- Slice_1412
Input dims size = 2 dims --- 0 85
Layer 299 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 300 -- layer name -- Div_1425
Input dims size = 2 dims --- 0 1
Layer 300 --- op type - Div, Number of input dims 2 != 4 .. not supported by TIDL
Layer 301 -- layer name -- Slice_1402
Input dims size = 2 dims --- 0 85
Layer 301 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 302 -- layer name -- Add_1429
Input dims size = 2 dims --- 0 1
Layer 302 --- op type - Add, Number of input dims 2 != 4 .. not supported by TIDL
Layer 303 -- layer name -- Slice_1407
Input dims size = 2 dims --- 0 85
Layer 303 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 304 -- layer name -- Div_1424
Input dims size = 2 dims --- 0 1
Layer 304 --- op type - Div, Number of input dims 2 != 4 .. not supported by TIDL
Layer 305 -- layer name -- Slice_1397
Input dims size = 2 dims --- 0 85
Layer 305 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 306 -- layer name -- Add_1428
Input dims size = 2 dims --- 0 1
Layer 306 --- op type - Add, Number of input dims 2 != 4 .. not supported by TIDL
Layer 307 -- layer name -- Sub_1427
Input dims size = 2 dims --- 0 1
Layer 307 --- op type - Sub, Number of input dims 2 != 4 .. not supported by TIDL
Layer 308 -- layer name -- Sub_1426
Input dims size = 2 dims --- 0 1
Layer 308 --- op type - Sub, Number of input dims 2 != 4 .. not supported by TIDL
Layer 309 -- layer name -- Concat_1434
Input dims size = 2 dims --- 0 1
Layer 309 --- op type - Concat, Number of input dims 2 != 4 .. not supported by TIDL
Layer 310 -- layer name -- Gather_1444
Input dims size = 2 dims --- 0 6
Layer 310 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
Layer 311 -- layer name -- Gather_1451
Input dims size = 2 dims --- 0 6
Layer 311 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
Layer 312 -- layer name -- Unsqueeze_1453
Input dims size = 1 dims --- 0
Layer 312 --- op type - Unsqueeze, Number of input dims 1 != 4 .. not supported by TIDL
Layer 313 -- layer name -- Unsqueeze_1454
Input dims size = 2 dims --- 1 0
Layer 313 --- op type - Unsqueeze, Number of input dims 2 != 4 .. not supported by TIDL
Layer 314 -- layer name -- Slice_1449
Input dims size = 2 dims --- 0 6
Layer 314 --- op type - Slice, Number of input dims 2 != 4 .. not supported by TIDL
Layer 315 -- layer name -- Unsqueeze_1452
Input dims size = 2 dims --- 0 4
Layer 315 --- op type - Unsqueeze, Number of input dims 2 != 4 .. not supported by TIDL
Layer 316 -- layer name -- NonMaxSuppression_1457
Input dims size = 3 dims --- 1 0 4
Layer 316 --- op type - NonMaxSuppression, Number of input dims 3 != 4 .. not supported by TIDL
Layer 317 -- layer name -- Gather_1459
Input dims size = 2 dims --- 0 3
Layer 317 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
Layer 318 -- layer name -- Squeeze_1460
Input dims size = 2 dims --- 0 1
Layer 318 --- op type - Squeeze, Number of input dims 2 != 4 .. not supported by TIDL
Layer 319 -- layer name -- Gather_1462
Input dims size = 2 dims --- 0 6
Layer 319 --- op type - Gather, Number of input dims 2 != 4 .. not supported by TIDL
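All of the layers flagged as unsupported above belong to the ONNX post-processing subgraph (box decode, score filtering and NMS), whose tensors are 2-D, 3-D, 5-D or 6-D rather than the 4-D shapes TIDL expects; the backbone, neck and detection-head convolutions up to Conv_194 / Conv_490 / Conv_786 / Conv_1082 are all accepted. One hedged workaround, sketched below, is to prune the exported graph at those four convolution outputs with onnx.utils.extract_model so that only the 4-D part is offloaded. The model paths and tensor names used here are placeholders, not the real names from this export; they have to be read from the actual ONNX file (e.g. with Netron) before running this.

```
# Hypothetical sketch: cut the exported ONNX graph at the detection-head
# convolutions so that only 4-D tensors (accepted by TIDL in the log above)
# remain in the offloaded part. All names below are placeholders.
import onnx.utils

model_path = "yolov5_ti_lite.onnx"          # placeholder: full exported model
pruned_path = "yolov5_ti_lite_heads.onnx"   # placeholder: pruned model to import into TIDL

# Placeholder tensor names for the outputs of Conv_194 / Conv_490 /
# Conv_786 / Conv_1082 -- look up the real names in the exported model.
head_outputs = ["head_p3", "head_p4", "head_p5", "head_p6"]

onnx.utils.extract_model(
    model_path,
    pruned_path,
    input_names=["images"],     # placeholder input tensor name
    output_names=head_outputs,
)
```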
[Image: runtimes_visualization]
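If the graph is cut at those convolutions, the decode that the unsupported Reshape/Transpose/Sigmoid/Slice/Mul/Sub/Add/Pow operators implement has to run on the host instead. The NumPy sketch below assumes the standard YOLOv5 decoding (grid offset plus anchor scaling); the anchors and strides must match the exported model, and the score filtering and NonMaxSuppression steps that appear later in the log would still be applied afterwards on the host.

```
import numpy as np

def decode_head(raw, anchors, stride, num_classes=80):
    """Decode one raw head output of shape (1, 3*(5+nc), H, W) into
    (1, 3, H, W, 5+nc) predictions with boxes in pixels, mirroring the
    Reshape/Transpose/Sigmoid/Slice/Mul/Sub/Add/Pow ops flagged above.
    Assumes the standard YOLOv5 decode; anchors is a list of 3 (w, h) pairs."""
    n, _, h, w = raw.shape
    nc = num_classes
    # Reshape + Transpose: (1, 3*(5+nc), H, W) -> (1, 3, H, W, 5+nc)
    y = raw.reshape(n, 3, 5 + nc, h, w).transpose(0, 1, 3, 4, 2)
    y = 1.0 / (1.0 + np.exp(-y))                                   # Sigmoid
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack((gx, gy), axis=-1).reshape(1, 1, h, w, 2).astype(np.float32)
    anchor_grid = np.asarray(anchors, dtype=np.float32).reshape(1, 3, 1, 1, 2)
    y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride        # xy: Mul, Sub, Add, Mul
    y[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchor_grid           # wh: Mul, Pow, Mul
    return y
```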

Model makes no object classification

❔Question

Just to confirm my understanding: the model assumes all objects are humans and therefore performs no classification of the object type, correct? (In contrast to, say, YOLOv5, which also classifies the type of object in each bounding box, e.g. dog, cat, etc.)

Additional context
