
pinto0309 / openvino-yolov3

537 stars · 31 watchers · 168 forks · 95.92 MB

YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO

Home Page: https://qiita.com/PINTO

License: Apache License 2.0

Python 71.79% Shell 2.65% C++ 25.56%
openvino yolov3 deep-learning deeplearning object-detection python opencv tensorflow cpu ncs

openvino-yolov3's People

Contributors

aqsaghaffarr, gillamkid, kndt84, pinto0309


openvino-yolov3's Issues

ImportError: No module named 'openvino'

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): RaspberryPi3

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): arm

Thank you very much.
I have already configured the environment on the RaspberryPi3 with OpenVINO, but when I run the YOLO program, the error "ImportError: No module named 'openvino'" always occurs. Do you know how to solve this error? Thank you
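For reference, this error usually means the OpenVINO environment was not initialized (setupvars.sh not sourced) or the Inference Engine Python bindings are not on PYTHONPATH. The scripts in this repository guard the import for the Raspberry Pi package layout; a minimal sketch of that same pattern (the armv7l path is the layout these scripts assume):

# Fallback import used by this repository's scripts: try the armv7l package
# layout first, then the standard openvino package set up by setupvars.sh.
try:
    from armv7l.openvino.inference_engine import IENetwork, IEPlugin
except ImportError:
    from openvino.inference_engine import IENetwork, IEPlugin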

I get the ~same FPS by using -numncs 1 and -numncs 2

Shouldn't I get roughly 2x the FPS by using -numncs 2?

[Required] Custom device Atom CPU + 2 x MyriadX (VPU) via mini-PCI-express

[Required] Your device's CPU architecture x86_64 Intel Atom E3845

[Required] Ubuntu16.04.6 LTS (OpenVINO 2019.1.144)

[Required] Details of the work you did before the problem occurred:
I get the ~same FPS by using -numncs 1 and -numncs 2.

  • I get 10.7 FPS by using -numncs 1
    image

  • And I get 10.3 FPS by using -numncs 2
    image


By running multistick_cpp example from OpenVINO, I get this message:
image

python_ncs
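With the deprecated IEPlugin API used by these scripts, throughput only scales when each stick receives its own loaded network and its own share of the frames. A minimal round-robin sketch of that idea (illustrative only, not the repository's worker code; the model path is an example, frames are assumed to be preprocessed NCHW blobs, and it is assumed each plugin.load() call occupies a separate idle MYRIAD device, as the 2019.x plugin did):

from openvino.inference_engine import IENetwork, IEPlugin

model_xml = "lrmodels/YoloV3/FP16/frozen_yolo_v3.xml"   # example path
model_bin = model_xml.replace(".xml", ".bin")

net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))

# One plugin + executable network per stick; each load is assumed to occupy
# one idle MYRIAD device.
exec_nets = [IEPlugin(device="MYRIAD").load(network=net) for _ in range(2)]

def infer_round_robin(preprocessed_frames):
    # Alternate frames between the two sticks (synchronous for brevity; the
    # MultiStick scripts in this repository use async requests and workers).
    for i, blob in enumerate(preprocessed_frames):
        yield exec_nets[i % 2].infer({input_blob: blob})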

IR model results different from PB model

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): Laptop PC

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu1604

[Required] Details of the work you did before the problem occurred:

  1. convert weight file into PB file
    python3 convert_weights_pb.py --class_names coco.names --weights_file yolov3.weights --data_format NHWC --output_graph pbmodels/frozen_yolo.pb

  2. verify pb file
    python3 demo.py --class_names coco.names --data_format NHWC --frozen_model pbmodels/frozen_yolo.pb --input_img dog.jpg --output_img dog_pb.jpg

  3. convert pb file to IR file
    python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model pbmodels/frozen_yolo.pb --output_dir lrmodels/YoloV3/FP16/ --data_type FP16 --batch 1 --tensorflow_use_custom_operations_config yolo_v3_changed.json

  4. detect using IR files
    python3 openvino_yolov3_test_img_cam.py -d MYRIAD -m lrmodels/YoloV3/FP16/frozen_yolo.xml -i dog.jpg

[Required] Error message:

The dog_pb.jpg output is normal. The confidence of the bicycle is 0.994 (99.4%).

However, the output of the IR model is not good. The confidence of the detected bicycle is only 65.9%.


[Required] Overview of problems and questions:

Is anything wrong with the IR conversion?
Below is the detection using pb file:

dog_pb

Below is the detection using IR file:
output

demo.py, openvino_yolov3_test_img_cam.py is attached as:
scripts.zip

model files are too big to attach. Please see link:
https://drive.google.com/file/d/1tN0LlUpazH0FJGoSIeiQ0HUBb_UPZnny/view?usp=sharing

I would appreciate your input. Thanks.

How about the inference accuracy

Hi,
Great work. I'm also trying to use YOLOv3 in OpenVINO. Have you checked the inference accuracy of the converted YOLOv3 model? I trained my own dataset with a new model (based on the YOLOv3 structure). It works fine in Darknet, but after conversion the inference accuracy drops a lot. I found that the conversion from Darknet to TensorFlow may have some problems: the converted TensorFlow model performs badly at inference, and the accuracy drops a lot. Have you tried other YOLOv3-based models, especially your own?

Unable to run yolov3_tiny with different input_size

Hi,

First of all, I have successfully run openvino_tiny-yolov3_test.py.
Then I would like to trade accuracy for speed by reducing the input_size (416 -> 320). I successfully achieved this with yolov3, but I am not able to achieve it with yolov3_tiny. Please see the details below:

get model
wget https://pjreddie.com/media/files/yolov3-tiny.weights

convert yolo to tensorflow
python3 /opt/OpenVINO-YoloV3/convert_weights_pb.py --class_names /opt/darknet/data/coco.names --weights_file /opt/darknet/yolov3-tiny.weights --data_format NHWC --output_graph /data/train/frozen_darknet_yolov3_tiny_model.pb --size 320 --tiny

convert tensorflow to IR
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /data/train/frozen_darknet_yolov3_tiny_model.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json --data_type FP16 --batch 1 --output_dir /data/train/

Test with Python (on Raspberry Pi Stretch with NCS2):

import os

try:
    from armv7l.openvino.inference_engine import IENetwork, IEPlugin
except:
    from openvino.inference_engine import IENetwork, IEPlugin
    
model_xml = "/home/pi/models/frozen_darknet_yolov3_tiny_model.xml"
model_bin = os.path.splitext(model_xml)[0] + ".bin"

plugin = IEPlugin(device="MYRIAD")
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
exec_net = plugin.load(network=net)

Then error occurs:

Traceback (most recent call last):
File "yolov3_tiny_test.py", line 14, in
exec_net = plugin.load(network=net)
File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: [VPU] Internal error: Output in detector/yolo-v3-tiny/pool2_5/MaxPool has incorrect width dimension. Expected: 9 or 9 Actual: 10
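It can help to confirm what input size was actually baked into the converted IR before loading it on the device; this does not fix the MaxPool dimension mismatch, it only verifies the conversion. A minimal sketch with the same deprecated API and paths used in the snippet above:

import os
from openvino.inference_engine import IENetwork

model_xml = "/home/pi/models/frozen_darknet_yolov3_tiny_model.xml"
model_bin = os.path.splitext(model_xml)[0] + ".bin"

# Inspect the input shape stored in the IR before loading it on MYRIAD.
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
print("IR expects an NCHW input of", (n, c, h, w))  # e.g. (1, 3, 320, 320)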

RuntimeError: AssertionFailed: newDims[newPerm[i]]=1 (Rpi+NCS(#of 2))

Hi, I ran into a big problem with my own model trained on the Uber dataset.
Converting the .pb to .xml & .bin was successful without any error.
But when I run inference with the code, "RuntimeError: AssertionFailed: newDims[newPerm[i]]=1" appears as follows:
File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: newDims[newPerm[i]]=1

How can I fix this error? Should I reconvert my own model with other options?
I used just the basic options.

Convert Tensorflow model to OpenVINO model problem

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
PC
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
Intel I7-9700K
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Ubuntu 16.04
[Required] Details of the work you did before the problem occurred:

Converting the YoloV3 model to a TensorFlow model seems to succeed. But when I go to the next step, converting the TensorFlow model to an OpenVINO model, the issue below occurs.


[Required] Error message:

sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model pbmodels/voc_weights_20190620.pb --output_dir lrmodels/YoloV3/FP32/ --data_type FP32 --batch 1 --tensorflow_use_custom_operations_config yolo_v3_changed_voc_20190620.json

Model Optimizer arguments:

Common parameters:

- Path to the Input Model: /home/e312/OpenVINO-YoloV3/pbmodels/voc_weights_20190620.pb

- Path for generated IR: /home/e312/OpenVINO-YoloV3/lrmodels/YoloV3/FP32/

- IR output name: voc_weights_20190620

- Log level: ERROR

- Batch: 1

- Input layers: Not specified, inherited from the model

- Output layers: Not specified, inherited from the model

- Input shapes: Not specified, inherited from the model

- Mean values: Not specified

- Scale values: Not specified

- Scale factor: Not specified

- Precision of IR: FP32

- Enable fusing: True

- Enable grouped convolutions fusing: True

- Move mean values to preprocess section: False

- Reverse input channels: False

TensorFlow specific parameters:

- Input model in text protobuf format: False

- Offload unsupported operations: False

- Path to model dump for TensorBoard: None

- List of shared libraries with TensorFlow custom layers implementation: None

- Update the configuration file with input/output node names: None

- Use configuration file used to generate the model with Object Detection API: None

- Operations to offload: None

- Patterns to offload: None

- Use the config file: /home/e312/OpenVINO-YoloV3/yolo_v3_changed_voc_20190620.json

Model Optimizer version: 1.5.12.49d067a0

[ ERROR ] List of operations that cannot be converted to IE IR:

[ ERROR ] LeakyRelu (72)

[ ERROR ] detector/darknet-53/Conv/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_1/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_2/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_3/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_4/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_5/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_6/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_7/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_8/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_9/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_10/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_11/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_12/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_13/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_14/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_15/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_16/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_17/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_18/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_19/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_20/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_21/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_22/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_23/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_24/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_25/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_26/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_27/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_28/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_29/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_30/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_31/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_32/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_33/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_34/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_35/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_36/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_37/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_38/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_39/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_40/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_41/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_42/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_43/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_44/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_45/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_46/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_47/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_48/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_49/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_50/LeakyRelu

[ ERROR ] detector/darknet-53/Conv_51/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_1/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_2/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_3/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_4/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_7/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_8/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_9/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_10/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_11/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_12/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_13/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_15/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_16/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_17/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_18/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_19/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_20/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_21/LeakyRelu

[ ERROR ] detector/yolo-v3/Conv_5/LeakyRelu

[ ERROR ] Part of the nodes was not translated to IE. Stopped.

For more information please refer to Model Optimizer FAQ
(<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.




[Required] Overview of problems and questions:



Please help me solve this problem! Thank you.

Tiny yolov2

How can I edit this code to add tiny YOLOv2 support?

multiple cpu cores

Hi,

Thank you very much for your guide; I finally got a custom-trained tiny YOLOv3 working on my laptop. The full YOLOv3 version also works, at 4-5 FPS. My PC shows one CPU core utilized at 100%; is it possible to spread the computation across multiple CPU cores? I can tell the batch size is forced to 1; would it work asynchronously with a higher batch size and potentially increase overall inference speed when running a larger model like YOLOv3?

8-bit blob vs 32-bit blob

Hi again!
Here I want to point out a subtle detail I recently learned from an Intel engineer in this forum question.

There, he suggested generating the blob with ddepth=cv.CV_8U, which improved performance on NCS2 (for MobileNet+SSD). That could be applicable to this YOLOv3 as well.
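For reference, cv2.dnn.blobFromImage accepts a ddepth argument, so the 8-bit blob suggested above can be produced like this (a minimal sketch; the size and swapRB values are illustrative, not taken from this repository's scripts):

import cv2 as cv

image = cv.imread("dog.jpg")
# CV_8U keeps the blob as 8-bit integers instead of converting to float32.
# Note: with CV_8U, scalefactor and mean must stay at their defaults (1.0 and 0),
# so any scaling has to happen inside the network or after inference.
blob = cv.dnn.blobFromImage(image, size=(416, 416), swapRB=True, crop=False,
                            ddepth=cv.CV_8U)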

Can we use AAEON © BOXER-6405 to run these examples?

I am wondering if these YOLO examples can be run on an edge device called the BOXER-6405.
When I tried to run the cpp example using this command: ./object_detection_demo_yolov3_async -i <path_to_video>/inputVideo.mp4 -m <path_to_model>/frozen_yolo_v3.xml -l ../lib/libcpu_extension.so -d CPU I got the following error:

Cannot find plugin to use :Tried load plugin : MKLDNNPlugin, error: Plugin MKLDNNPlugin cannot be loaded: cannot load plugin: MKLDNNPlugin from ../lib: Cannot load library '../lib/libMKLDNNPlugin.so': ../lib/libMKLDNNPlugin.so: cannot open shared object file: No such file or directory, skipping
cannot load plugin: MKLDNNPlugin from : Cannot load library 'libMKLDNNPlugin.so': libmkl_tiny_omp.so: cannot open shared object file: No such file or directory, skipping
These are hardware specs of BOXER-6405:

CPU : Intel® Celeron/Pentium™ processor

Chipset : Intel® System on Chip
System Memory : DDR3L SO-DIMM slot x 1 Supports 1867 MHz and up to 8GB
Display Interface : HDMI (max. 1920 x 1080)
Storage Device : mSATA

Open Vino 2019 not giving any results

[Required] Device : CPU i5

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): x86_64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Windows

[Required] Details of the work you did before the problem occurred:
I tried to run the openvino_tiny-yolov3_test.py file with OpenVINO 2018 and my trained Yolo v3 tiny network. The inference works well with CPU but when I try with NCS2 (using -d MYRIAD), the result gives a lot of false positives. As is visible in the image attached.
Later I found on this forum, https://software.intel.com/en-us/node/804818#comment-form, that the issue is with the OpenVINO version, which has a bug in the MYRIAD plugin.

So, I installed OpenVINO 2019 on the same machine, configured the inference engine, checked its installation by running sample code, then transformed the network into IR model by using mo_tf.py of openVINO 2019.
When I re-ran openvino_tiny-yolov3_test.py with this openVINO 2019, it gave no results. Only one frame appears on the screen after running the code and then the code shuts down without any error message. Did I miss something?
image

deployment to AWS Iot Core Greengrass

I wonder if anyone has attempted to deploy the application with Greengrass. The platform seems to be restricted to Python 2.7 and 3.7, while openvino-raspbian runs on Python 3.5, so I tried to set up Greengrass with Python 3.7, and when it was deployed I got this error:

image

conversion compatibility with custom architecture of yolov3-tiny

Hi @PINTO0309 ,

thank you for your great support and dedicated work; I also found your discussions on the Intel Community Forum very helpful.

I am opening this issue just to ask whether you have ever tried to convert a YOLOv3-tiny model with a custom three [yolo] layers into IR before. If not, do you think the Model Optimizer could convert such a variant?

The custom architecture config file is derived from this repo, in the "How to Improve Object Detection" section.

detection accuracy: displaced detection box

Hello,
First I should say thank you for your great project. I found an issue in your project.

I saw that the detection boxes are a bit displaced for some inputs (probably when the aspect ratio of the input image is not 1:1). I think there is an issue in your scaling math.
Just as a test, I replaced your scaling math with a plain resize to a 1:1 square. I know that not keeping the aspect ratio is not correct and it reduces detection accuracy, but it proves that there is no problem with the model, only with the coordinate math. See the outputs below:
your output:
Pinto
my output:
mine
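For readers hitting the same displacement: the usual fix is to undo the letterbox padding when mapping boxes back to the original image. A minimal sketch of that standard transform (not this repository's exact code), assuming the image was resized with preserved aspect ratio and padded to an m_input_size x m_input_size square:

def unletterbox_box(x, y, box_w, box_h, input_size, image_w, image_h):
    # Map a box predicted on the padded square input back to image coordinates.
    scale = min(input_size / image_w, input_size / image_h)
    pad_x = (input_size - image_w * scale) / 2.0
    pad_y = (input_size - image_h * scale) / 2.0
    # Remove the padding offset, then undo the resize.
    return ((x - pad_x) / scale, (y - pad_y) / scale,
            box_w / scale, box_h / scale)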

This problem occurred

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
RaspberryPi3
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian
[Required] Details of the work you did before the problem occurred:

I use the xml and bin files you have converted.

Then run on the Raspberry Pi
[Required] Error message:

this demo only accepts networks with three layers

[Required] Overview of problems and questions:

Running ./object_detection_demo_yolov3_async -i cam -m /home/xs/tensorflow_tools/E:\OpenVINO\OpenVINO-YoloV3-master\lrmodels\tiny-YoloV3\FP16\frozen_tiny_yolo_v3.xml -d MYRIAD displays "this demo only accepts networks with three layers".



[ ERROR ] Following layers are not supported by the plugin for specified device CPU

Hi, I have a problem with the sample. Would you please help me?

OpenVINO docker,
Ubuntu 1604,
x86_64,

I followed the steps in your script.txt file to generate the *.xml and *.bin files of the YoloV3 model,
and used python3 to run the sample in /opt/intel/openvino/inference_engine/samples/python_samples/object_detection_demo_yolov3_async.
I got the following error:

root@9b5f38ae6a55:/opt/intel/openvino/inference_engine/samples/python_samples/object_detection_demo_yolov3_async# python3 object_detection_demo_yolov3_async.py -m frozen_tiny_yolo_v3.xml -i ../../../../deployment_tools/demo/video.mp4 -d CPU
[ INFO ] Loading network files:
frozen_tiny_yolo_v3.xml
frozen_tiny_yolo_v3.bin
[ ERROR ] Following layers are not supported by the plugin for specified device CPU:
detector/yolo-v3-tiny/Conv_9/BiasAdd/YoloRegion, detector/yolo-v3-tiny/ResizeNearestNeighbor, detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion
[ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument
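The YoloRegion and ResizeNearestNeighbor layers were shipped as CPU extension layers in the 2018/2019 releases, so the sample has to be pointed at that library, either with the sample's -l/--cpu_extension argument or in code. A minimal sketch with the deprecated IEPlugin API (the extension path below is a typical 2019.x Linux location and may differ per installation):

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
# Typical 2019.x location on Linux; adjust for your installation.
plugin.add_cpu_extension(
    "/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so")

net = IENetwork(model="frozen_tiny_yolo_v3.xml", weights="frozen_tiny_yolo_v3.bin")
exec_net = plugin.load(network=net)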

There is a problem with self-trained weights.

There is a problem with my self-trained weights (one class).
I followed your script.txt
FP32 (CPU): frozen_tiny_yolo_v3.xml and frozen_tiny_yolo_v3.bin test is normal.
FP16 (MYRIAD): frozen_tiny_yolo_v3.xml and frozen_tiny_yolo_v3.bin test showed a lot of abnormal noise.
I am trying to modify yolo_v3_tiny_changed.json:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 1,
      "coords": 4,
      "num": 6,
      "mask": [0, 1, 2],
      "jitter": 0.3,
      "ignore_thresh": 0.7,
      "truth_thresh": 1,
      "random": 1,
      "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]

I don't know what I need to modify.

Can you help me solve this problem? Thank you.

PiCamera


Does it work using Picamera (not USBCamera)?

My environment is a Raspberry Pi 3 (B+) + Raspbian (OS) + PiCamera + NCS2.

I got the error below:
(python3:3413): GStreamer-CRITICAL **: gst_element_get_state: assertion 'GST_IS_ELEMENT(element)' failed VIDEOIO ERROR: V4L: can't open camera by index 0
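The scripts open the camera through cv2.VideoCapture, which only sees the PiCamera once the V4L2 driver is loaded (sudo modprobe bcm2835-v4l2, so it appears as /dev/video0). Alternatively, frames can be grabbed with the picamera package and fed to the same inference code; a minimal sketch, assuming picamera is installed (the resolution and the inference call are placeholders):

import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))

# capture_continuous yields BGR frames that can replace the cv2.VideoCapture read.
for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array          # numpy array, shape (480, 640, 3)
    # ... run the usual preprocessing + exec_net inference on `image` here ...
    raw.truncate(0)              # reset the buffer for the next frame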

Bounding box size

I tested a tiny YoloV3/FP16/MYRIAD model (20 VOC classes) with openvino_tiny-yolov3_test.py and the provided cpp project. The accuracy is bad, so I had to increase the threshold to 0.7.
The funny thing is that the Python and cpp implementations draw boxes of different sizes, with the cpp implementation drawing boxes 1.5x to 2x larger than the Python demo. Any idea about this?

./build_samples.sh error: comparison between signed and unsigned integer expressions

My device is a LaptopPC
Ubuntu 16.04 64-bit
Processor: Intel Core i7-4650U
Graphics: Intel Haswell Mobile
Intel Movidius Neural Compute Stick V1
[Required] Details of the work you did before the problem occurred:

I need to run the object detection yolov3 async demo. I followed the environment preparation steps, and recompiling produces the error message below.

[Required] Error message:

/opt/intel/openvino_2019.1.133/deployment_tools/inference_engine/samples/object_detection_demo_yolov3_async/main.cpp:420:35: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
for (int i = 0; i < objects.size(); ++i) {

Selection_006

How can I fix this?
Thank you

what is the meaning of num = 9 in yolo_v3_changed.json?

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): Laptop PC

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): x86_64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu1604

[Required] Details of the work you did before the problem occurred:
I changed the config in Darknet training by reducing the number of anchors. I am wondering if the parameter num=9 is related to the number of anchors.
[Required] Error message:
num = number of anchors?
[Required] Overview of problems and questions:
I need more explanation of the num and mask parameters in the .json file. Thanks.

-Jeff.

RuntimeError: Cannot find plugin to use

File "openvino_tiny-yolov3_test.py", line 245, in
sys.exit(main_IE_infer() or 0)
File "openvino_tiny-yolov3_test.py", line 174, in main_IE_infer
plugin = IEPlugin(device=args.device)
File "ie_api.pyx", line 387, in openvino.inference_engine.ie_api.IEPlugin.cinit
RuntimeError: Cannot find plugin to use :

Can you also provide a step-by-step guide for setting up OpenVINO on the Raspberry Pi and for running this project on the Raspberry Pi?

Syncing the inference to the images

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
LaptopPC
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
x86_64
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Ubuntu 16.04
[Required] Details of the work you did before the problem occurred:

Followed the instructions

[Required] Error message:

No error

[Required] Overview of problems and questions:
It seems that the inference and the playback rate are almost the same (around 8.5-9 FPS), but the inference boxes seem to lag behind the frames. Is there a way to correctly sync the frames to the inference in imshow?
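One way to keep the boxes in sync is to draw each result on the frame that produced it rather than on the newest frame. A minimal sketch of that pairing with the old async API (illustrative only, not the repository's exact loop; it assumes cap, exec_net, and input_blob are set up as in this repository's scripts, and preprocess() and draw_boxes() are hypothetical helpers):

import collections
import cv2

pending = collections.deque()            # (request_id, frame) pairs in flight
next_id = 0
num_requests = len(exec_net.requests)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    exec_net.start_async(request_id=next_id,
                         inputs={input_blob: preprocess(frame)})  # hypothetical helper
    pending.append((next_id, frame))
    next_id = (next_id + 1) % num_requests

    # Once the pipeline is full, consume the oldest request and draw its
    # result on the frame that was submitted with it, not on the newest frame.
    if len(pending) == num_requests:
        req_id, matching_frame = pending.popleft()
        if exec_net.requests[req_id].wait(-1) == 0:
            draw_boxes(matching_frame, exec_net.requests[req_id].outputs)  # hypothetical helper
        cv2.imshow("result", matching_frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break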

Unable to create IR model with mo_tf.py from converted model

How do I convert Darknet .weights and .cfg files to Model Optimizer format?
I have used:

python3 /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo_tf.py --input_model /home/vadmin/Desktop/WorkFiles/OpenVINO-YoloV3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3]

Getting the following error:

Model Optimizer version:        1.5.12.49d067a0
[ ERROR ]  Shape [ -1 416 416   3] is not fully defined for output 0 of "inputs". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "inputs".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "inputs".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7f28674b49d8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "inputs" node.

YOLOV3 can't run on NCS2

API version ............ 1.4
Build .................. 17328
Description ....... myriadPlugin

[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] labels size is: 80
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Cannot convert layer "detector/yolo-v3/ResizeNearestNeighbor" due to unsupported layer type "Resample"

i have a question for you

C:\Intel\OpenVINO-YoloV3-master>python openvino_tiny-yolov3_test.py
Traceback (most recent call last):
File "openvino_tiny-yolov3_test.py", line 250, in
sys.exit(main_IE_infer() or 0)
File "openvino_tiny-yolov3_test.py", line 178, in main_IE_infer
plugin.add_cpu_extension("lib/libcpu_extension.so")
File "ie_api.pyx", line 423, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
File "ie_api.pyx", line 427, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
RuntimeError: Cannot load library 'lib/libcpu_extension.so': 193 from cwd: C:\Intel\OpenVINO-YoloV3-master

Hello, when I try this code, it throws this error. Can you help me?
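For reference, error 193 on Windows means the library is not a valid Win32 binary: lib/libcpu_extension.so in this repository is a Linux ELF shared object, so it cannot be loaded on Windows (or macOS; see the dlopen error in a later issue). A minimal sketch of picking an OS-appropriate extension (the non-Linux paths are placeholders, not files in this repository):

import platform
from openvino.inference_engine import IEPlugin

# The CPU extension must match the host OS: .so (ELF) on Linux, .dll on
# Windows, .dylib on macOS. The non-Linux paths below are placeholders; point
# them at the extension shipped with (or built by) your OpenVINO install.
EXT_BY_OS = {
    "Linux": "lib/libcpu_extension.so",
    "Windows": r"C:\path\to\cpu_extension.dll",       # placeholder path
    "Darwin": "/path/to/libcpu_extension.dylib",      # placeholder path
}

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension(EXT_BY_OS[platform.system()])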

What do the variables in openvino_tiny-yolov3_MultiStick_test.py mean?

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
RaspberryPi3 + NCS2
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian

openvino_tiny-yolov3_MultiStick_test.py

yolo_scale_13 = 13
yolo_scale_26 = 26
yolo_scale_52 = 52

classes = 80
coords = 4
num = 3
anchors = [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319]

Can you tell me what the above variables mean?
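For reference, these are the standard YOLOv3 decoding parameters; below is an annotated copy (the comments are a reading of the YOLOv3 design and the Model Optimizer JSON, not text from the script):

yolo_scale_13 = 13   # grid size of the coarsest output layer for a 416x416 input (416/32)
yolo_scale_26 = 26   # grid size of the middle output layer (416/16)
yolo_scale_52 = 52   # grid size of the finest output layer (416/8), used by full (non-tiny) YOLOv3

classes = 80         # number of object classes (COCO)
coords = 4           # box coordinates per prediction: x, y, w, h
num = 3              # anchors evaluated per grid cell on each output scale
# Anchor box sizes in pixels, listed as width,height pairs; each output scale
# uses the subset of these selected by its mask.
anchors = [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319]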

Single Class Conversion : ValueError: cannot reshape array of size 4607 into shape (18,256,1,1)

python3 convert_weights_pb.py --class_names /opt/person_detection_yolov3_tiny/obj.names --weights_file weights/yolov3-tiny_obj_80000.weights --data_format NHWC --tiny --output_graph pbmodels/frozen_tiny_yolov3_person_detection.pb
Traceback (most recent call last):
File "convert_weights_pb.py", line 60, in
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "convert_weights_pb.py", line 42, in main
load_ops = load_weights(tf.global_variables(scope='detector'), FLAGS.weights_file)
File "/opt/github2/OpenVINO-YoloV3/utils.py", line 122, in load_weights
(shape[3], shape[2], shape[0], shape[1]))
ValueError: cannot reshape array of size 4607 into shape (18,256,1,1)

improvements with openvino_tiny-yolov3_test.py

Hi,

Thanks for the Python script to increase test accuracy for yolov3-tiny.
I trained yolov3-tiny in Darknet on my own dataset, and changed the class labels according to my data. I converted the model into a (.pb) file, and further converted this (.pb) model to IR (.xml and .bin) files using the OpenVINO toolkit.

I'm using your Python script (openvino_tiny-yolov3_test.py) to preprocess and postprocess my detections from the Movidius (Intel's Compute Stick). I have changed the labels as per my needs.
The problem is that I'm getting some false positives in the results. Can you please guide me on what kind of tweaks I can make to your script so that it adapts to my testing environment?

Thanks for help.

Why RegionYolo's mask are the same across different scales?

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
Intel NUC
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
x86_64
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Ubuntu 16.04
[Required] Details of the work you did before the problem occurred:



Run openvino_yolov3_test.py


[Required] Error message:
N/A
[Required] Overview of problems and questions:



I noticed that in your yolov3 xml model, the three RegionYolo layers all have the same mask, i.e., 3,4,5. Is there a particular reason why? Shouldn't they be (0,1,2), (3,4,5), (6,7,8) respectively? Thanks.

Can't convert to bin and xml file?

When I convert my yolov3.onnx to .bin and .xml files, I get an error. Is it not supported?
Environment: win10_x64, python3.7 + pytorch1.1 + onnx1.5 + OpenVINO2019.1.148
Command: python mo.py --input_model yolov3.onnx
The "yolov3.onnx" file comes from here.

Model Optimizer version: 2019.1.1-83-g28dfbfd
[ ERROR ] Cannot pre-process ONNX graph after reading from model file "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\yolov3.onnx". File is corrupt or has unsupported format. Details: Reference to y3:01 is not satisfied. A node refer not existing data tensor. ONNX model is not consistent. Protobuf fragment: input: "y3:01"
output: "TFNodes/yolo_evaluation_layer_1/Shape_3:0"
name: "TFNodes/yolo_evaluation_layer_1/Shape_3"
op_type: "Shape"
.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #44.

Unable to get correct results from self trained yolo v3 weights

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): laptop PC

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): x86_64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu1604

[Required] Details of the work you did before the problem occurred:
Hi, this is a great project; I am very impressed. I was able to follow the instructions to convert the downloaded yolo_v3.weights to a .pb file and then to an IR model. The test results are good.

[Required] Error message:
However, I was not able to make it work on my own trained weight file. The trained weight file works fine in darknet.

[Required] Overview of problems and questions:
With the same command line, the generated .pb file does not produce any detections. Using the Darknet command line, the result is perfectly fine.

Here is what I have done:

python3 convert_weights_pb.py --class_names Milk_8.names --weight_files yolov3-Milk_20000.weights --data_format NHWC --output_graph pbmodels/frozen_milk8.pb

9
detect_1.shape = (?, 507, 14)
detect_2.shape = (?, 2028, 14)
detect_3.shape = (?, 8112, 14)
detections.shape = (?, 10647, 14)
Tensor("detector/yolo-v3/detections:0", shape=(?, 10647, 14), dtype=float32)
detector/yolo-v3/detections:0
2019-03-14 20:48:52.768352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
1330 ops written to pbmodels/frozen_milk8.pb.

cd ../tensorflow-yolo-v3-master

python3 demo.py --class_names ../OpenVINO-YoloV3-master/Milk_8.names --data_format NHWC --frozen_model ../OpenVINO-YoloV3-master/pbmodels/frozen_milk8.pb --input_img 10_2.jpg --output_img 10_2_pb.jpg
Loaded graph in 0.92s
2019-03-14 20:49:41.881125: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
tensorflow-yolo-v3-master/utils.py:215: RuntimeWarning: invalid value encountered in less
iou_mask = ious < iou_threshold
Predictions found in 4.61s

Nothing was detected. The same weight file detects everything correctly with Darknet.

I put the weight file, pb file, and test image here. I would really appreciate it if you could help me out.
https://drive.google.com/drive/folders/1lwaj6ttnszbbZp7M3Ha7WDS-GffgE3jm?usp=sharing

Thanks.

-Jeff

List of operations that cannot be converted to IE IR (YoloV3 & TinyYoloV3)

Hello and thank you for this great repository,

It is really nice that someone is providing all these commands in the script, which is really helpful, and also all these benchmarks for different devices. Thank you again!!

The problem that I'm facing is that when I try to convert the .pb files of YOLO & tiny-YOLO that were produced by convert_weights_pb.py, I get this error (tested on both Windows and Linux):

[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (11)
[ ERROR ] detector/yolo-v3-tiny/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_5/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_6/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_8/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.

If I convert the downloaded .pb files that you provide directly, it works okay. Did you do anything else in order to convert the weights to a .pb file? I tried both the 445 and 455 versions of OpenVINO and nothing seems to bypass this error. Here are the commands that I ran:

Input: python convert_weights_pb.py --class_names coco.names --weights_file weights/yolov3-tiny.weights --data_format NHWC --tiny --output_graph pbmodels/frozen_tiny_yolo_v3.pb

Output: 299 ops written to pbmodels/frozen_tiny_yolo_v3.pb. (Everything seems okay here)

Input: python C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools/model_optimizer/mo_tf.py --input_model .\pbmodels\frozen_tiny_yolo_v3.pb --output_dir lrmodels/YoloV3/FP32 --data_type FP32 --batch 1 --tensorflow_use_custom_operations_config .\yolo_v3_tiny_changed.json
Output: The error that I mentioned above

Input: python C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools/model_optimizer/mo_tf.py --input_model .\pbmodels\frozen_tiny_yolo_v3.pb --output_dir lrmodels/YoloV3/FP16 --data_type FP16 --batch 1 --tensorflow_use_custom_operations_config .\yolo_v3_tiny_changed.json
Output: The error that I mentioned above

It's been three days that I've been trying to find out what is causing this, but I'm not able to find it, and I would really appreciate it if you have something in mind. I need to confirm that I can convert these files because I want to test with a custom configuration and custom objects, so if I cannot convert the weights from Darknet correctly, I cannot change the scripts for my custom structure/object detection.

Thank you in advance,

LaptopPC - Both tested on Linux & Windows 10
X86_64 - i5-8250U
Linux Mint - I managed to adapt the OpenVINO installation process in order to be able to install it - everything from the samples and your code works

P.S.: There is another issue when you run openvino_tiny-yolov3_MultiStick_test.py on video with higher resolutions: the bounding boxes are not scaled well. It is something that I'm working on, and I hope that I will be able to find a solution and maybe contribute to this great repository. Attached you can find the video on which I tested the script where the boxes are not scaled well (just for testing purposes).

street.zip


RuntimeError: Cannot load library 'lib/libcpu_extension.so'

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
MacBook pro 2014
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
x86_64
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
MacOs
[Required] Details of the work you did before the problem occurred:






[Required] Error message:
wupengdeMacbook-Pro:OpenVINO-YoloV3 wupeng$ python3 openvino_yolov3_test.py
Traceback (most recent call last):
File "openvino_yolov3_test.py", line 248, in
sys.exit(main_IE_infer() or 0)
File "openvino_yolov3_test.py", line 181, in main_IE_infer
plugin.add_cpu_extension("lib/libcpu_extension.so")
File "ie_api.pyx", line 584, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
File "ie_api.pyx", line 588, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
RuntimeError: Cannot load library 'lib/libcpu_extension.so': dlopen(lib/libcpu_extension.so, 1): no suitable image found. Did find:
lib/libcpu_extension.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03
/Users/wupeng/source_code/OpenVINO-YoloV3/lib/libcpu_extension.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03






[Required] Overview of problems and questions:
When I run openvino_yolov3_test.py, there is an error.





report error while loading shared libraries: libcpu_extension.so

hello,
thank you for your contribution.
I followed the same steps in your script, but I am getting the following error when I run this command:
cpp/object_detection_demo_yolov3_async -i path_to_data.mp4 -m abs_path/lrmodels/tiny-YoloV3/FP16/frozen_tiny_yolo_v3.xml -d MYRIAD -t 0.2

error:
./cpp/object_detection_demo_yolov3_async: error while loading shared libraries: libcpu_extension.so: cannot open shared object file: No such file or directory

I am sure I have the lib file in my OpenVINO-YoloV3 project. Can you help me solve this problem? Thank you.

Instructions on conversion from a custom YOLOv3 network

I have a trained YOLO network with 5 classes. Steps that I took.

  1. Convert from my YOLO weight file (trained using Darknet C++ model) to IR:

Conversion to pb:
python3 convert_weights_pb.py --class_names obj.names --data_format NHWC --weights_file k-yolo-obj_last.weights

My yolo_v3.json file:

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 5,
      "coords": 4,
      "num": 9,
      "mask": [0, 1, 2],
      "jitter": 0.3,
      "ignore_thresh": 0.5,
      "truth_thresh": 1,
      "random": 1,
      "anchors": [10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]

Conversion from pb file to IR:
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model ~/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config yolo_v3.json --batch 1 --data_type FP16

The conversions were successful.

  2. Test the IR model:
    In your script openvino_yolov3_test.py, I changed the class number to 5 and the labels to my label names. However, the result when running on NCS2 is very bad compared to testing the original weights with Darknet commands.

Do I need to correct the test file (m_input_size, camera_width, camera_height) or any other steps to make it work?

Thanks!

IR model Results Really Bad

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): LaptopPC

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): I7 8700K

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Windows Subsystem Linux(Ubuntu1804) with 2019R1 openvino

[Required] Overview of problems and questions:
Thanks for your great project. The IR model performs very badly, and differently from the PB result. Here is an example (yellow is the pb model):
result
I have also seen some related issues like #23, but I don't think it's a bug in the code, because the boxes are not just displaced; there are also many wrong predictions. I wonder if my conversion process is wrong.
Here is the json:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 4,
      "coords": 4,
      "num": 9,
      "mask": [3, 4, 5],
      "jitter": 0.3,
      "ignore_thresh": 0.7,
      "truth_thresh": 1,
      "random": 1,
      "anchors": [10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]

RuntimeError: Error reading network

device: Raspberry Pi 3 B+

CPU architecture: Intel Compute Stick 2 (MYRIAD)

OS: Raspbian Stretch Lite (April 2019)

What I did:

  1. Download original yolov3-tiny.weights from https://pjreddie.com/media/files/yolov3-tiny.weights
  2. Convert yolov3-tiny.weights to a tf model:
python3 tensorflow-yolo-v3/convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny
  3. Convert the tf model to IR:
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config yolo_v3_tiny.json --input_shape [1,416,416,3]
  4. Copy frozen_darknet_yolov3_model.xml and frozen_darknet_yolov3_model.bin to the Raspberry Pi
  5. Run python3 openvino_tiny-yolov3_test.py -d MYRIAD on the Raspberry Pi

Error message:

pi@raspberrypi:~ $ python3 openvino_tiny-yolov3_test.py -d MYRIAD
Traceback (most recent call last):
  File "openvino_tiny-yolov3_test.py", line 239, in <module>
    sys.exit(main_IE_infer() or 0)
  File "openvino_tiny-yolov3_test.py", line 168, in main_IE_infer
    net = IENetwork(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: in Layer detector/yolo-v3-tiny/pool2/MaxPool: 
trying to connect an edge to non existing output port: 2.1

Overview of problems and questions:
I'm not quite sure what this error means. Is there a difference between the yolov3-tiny.weights used in this repository and the original ones? And if so, how can I get the openvino_tiny-yolov3_test.py running with the original yolov3-tiny.weights?

When I use the files from OpenVINO-YoloV3/lrmodels/tiny-YoloV3/FP16/, the error doesn't show up.

I tried to adapt the openvino_tiny-yolov3_test.py by changing the num = 3 variable to num = 6 since the original yolov3-tiny.cfg uses this number. But this didn't make a difference.

Problem decoding output using opencv dnn module.

RaspberryPi3
Raspbian
Hey,
I am trying to use the OpenCV DNN module instead of IE, but I am not able to figure out the output we are getting from the last layers. I am getting a 255x13x13 tensor, so I am slicing it up and taking:
(1) confidence from index 4,
(2) X, Y, W, H from indices 0, 1, 2, 3,
(3) probabilities from 5:85.
Can you please tell us if we are doing it wrong?
I have gone through your code and the original pjreddie code. The thing is, we don't want to flatten the output and use entry points.
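For reference, the raw 255x13x13 tensor is 3 anchors x (4 coords + 1 objectness + 80 class scores) per grid cell, and the coordinates, objectness, and class scores need the YOLO activations applied before the slices are meaningful. A minimal numpy decode sketch of the standard YOLOv3 math (not this repository's parser; the anchors and thresholds are illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_scale(output, anchors, input_size=416, num_classes=80, conf_thresh=0.5):
    # Decode one raw YOLOv3 output of shape (3*(5+num_classes), grid, grid).
    grid = output.shape[-1]                       # e.g. 13
    preds = output.reshape(3, 5 + num_classes, grid, grid)
    boxes = []
    for a in range(3):
        aw, ah = anchors[a]
        for cy in range(grid):
            for cx in range(grid):
                tx, ty, tw, th, tobj = preds[a, :5, cy, cx]
                obj = sigmoid(tobj)
                if obj < conf_thresh:
                    continue
                # Center relative to the whole input, sizes from the anchor priors.
                bx = (sigmoid(tx) + cx) / grid * input_size
                by = (sigmoid(ty) + cy) / grid * input_size
                bw = np.exp(tw) * aw
                bh = np.exp(th) * ah
                cls_scores = sigmoid(preds[a, 5:, cy, cx]) * obj
                boxes.append((bx, by, bw, bh, obj, int(np.argmax(cls_scores))))
    return boxes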

A text-detection project

@PINTO0309
Currently I am working on a simple detection project, but I am facing some difficulties.
I basically want to create a text-detection model.
It would be great if you were interested, as I am willing to pay.
Waiting for your reply.

Can't get a custom tiny-yolov3 model to work (RPi+NCS2)

I'm using the following commands to convert a tiny yolov3 weights file to a .bin/.xml pair:

From https://github.com/mystic123/tensorflow-yolo-v3:

python convert_weights_pb.py \
        --weights_file custom_tiny_yolov3.weights \
        --class_names custom_tiny_yolov3.names \
        --data_format NHWC \
        --tiny \
        --output_graph custom_tiny_yolov3.pb
sudo python3 /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo_tf.py \
        --input_model custom_tiny_yolov3.pb \
        --output_dir custom_tiny_yolov3/ \
        --data_type FP16 \
        --batch 1 \
        --tensorflow_use_custom_operations_config modified-tiny-yolov3.json

My modified-tiny-yolov3.json file looks like this:

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 6,
      "coords": 4,
      "num": 6,
      "mask": [0,1,2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape","detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]

Modified the openvino_tiny-yolov3_test.py: classes, num and LABELS variables to reflect the custom tiny yolov3 model; changed the model_xml variable in the "main_IE_infer()" function to point to the folder containing the .bin and .xml files.

classes = 6
coords = 4
num = 6
anchors = [10,14, 23,27, 37,58, 81,82, 135,169, 344,319]
LABELS = ("SIX", "DIFFERENT", "LABELS", "ARE", "INCLUDED", "HERE")

After executing "python3 openvino_tiny-yolov3_test.py -d MYRIAD" I always get errors like:

Traceback (most recent call last):
  File "openvino_tiny-yolov3_test.py", line 244, in <module>
    sys.exit(main_IE_infer() or 0)
  File "openvino_tiny-yolov3_test.py", line 203, in main_IE_infer
    objects = ParseYOLOV3Output(output, m_input_size, m_input_size, camera_height, camera_width, 0.2, objects)
  File "openvino_tiny-yolov3_test.py", line 125, in ParseYOLOV3Output
    scale = output_blob[obj_index]
IndexError: index 25012 is out of bounds for axis 0 with size 22308

I know your version is very early, and I really appreciate the effort. If you have any idea why I'm getting this error, it would be great if you could help me fix it. It seems to me the problem is related to the conversion from .weights/.cfg to .bin/.xml, since I've tried the same with the default tiny yolov3 files and got a similar error. However, I've tried following all the advice from the Intel forums, with no success. Please let me know if you need additional information.

Thanks a lot!

change the resolution

Hi!
I tried to work with your converted IR models at different input resolutions (other than your 418), but it seems they do not accept anything other than 418. Did you check this?
As I tested, the original Darknet model can do so. I saw that working at 208x208 gives 3x the inference FPS on a computer CPU, while the precision is not much worse (pictures below). What do you think? Should we convert IR models for different resolutions?

Yolov3, blob resolution: 418x418
Yolov3_418x418

Yolov3, resolution 208x208:
Yolov3_208x208

face detect

Hello, thanks for sharing the NCS2 sample. I just want to ask: could the NCS2 with OpenVINO implement MTCNN, the common face detection method? I would be grateful for any reply!

yolov3-tiny model performs strangely

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
LaptopPC-NCS2
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
x86_64
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Ubuntu1804,win10 1809
[Required] Details of the work you did before the problem occurred:

I trained a yolov3-tiny model with my own dataset. The actual number of object classes is 4, so I set [classes = 4, filters=27] in [yolov3-tiny.cfg].

After the training is completed, the model [yolov3-tiny.weights] is obtained. For the [Darknet (.weights) -> TensorFlow (.pb)] step, the actual 4 categories are set in yolov3-tiny.names. For the [TensorFlow (.pb) -> OpenVINO (.bin)] conversion, I set ["classes": 4] in [yolo_v3_tiny_changed.json]. The IR model converted this way detects nothing.

But when I change [4] to [80] (in [yolov3-tiny.cfg], set [classes = 80, filters=255]; in [yolov3-tiny.names], set 80 categories; in [yolo_v3_tiny_changed.json], set ["classes": 80]), the IR model converted this way detects well.
[Required] Error message:

With classes set to 4, the trained model detects no objects, but with the same data and classes set to 80, objects are detected properly.




[Required] Overview of problems and questions:

Can someone tell me why this is happening?



tiny yolo one NCS runtime error

Hi,
I'm working on a Raspberry Pi3 Model B+ with Raspbian OS and the armv7l architecture.
When I run:

python3 openvino_yolov3_MultiStick_test.py -numncs 1

all works well, but when I run:

python3 openvino_tiny-yolov3_MultiStick_test.py -numncs 1

I have this error:

Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/dev/aing/yolo/OpenVINO-YoloV3-master/openvino_tiny-yolov3_MultiStick_test.py", line 352, in inferencer
    thworker = threading.Thread(target=async_infer, args=(NcsWorker(devid, frameBuffer, results, camera_width, camera_height, number_of_ncs, vidfps),))
  File "/home/dev/aing/yolo/OpenVINO-YoloV3-master/openvino_tiny-yolov3_MultiStick_test.py", line 261, in __init__
    self.net = IENetwork(model=self.model_xml, weights=self.model_bin)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: segment exceeds given buffer limits. Please, validate weights file

The streaming window opens with no rectangles around the objects, so there is no inference.
Any idea?
Thanks
