hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLO v4 .weights to TensorFlow, TensorRT and TFLite.

Home Page: https://github.com/hunglc007/tensorflow-yolov4-tflite

License: MIT License

Python 55.38% Shell 0.57% Java 44.05%
yolov4 yolov3 tflite object-detection tensorflow tf2 tensorrt yolov3-tiny android

tensorflow-yolov4-tflite's Introduction

tensorflow-yolov4-tflite


YOLOv4 and YOLOv4-tiny implemented in TensorFlow 2.0. Convert YOLOv4, YOLOv3, and YOLO-tiny .weights to .pb, .tflite, and TRT format for TensorFlow, TensorFlow Lite, and TensorRT.

Download yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT

Prerequisites

  • Tensorflow 2.3.0rc0

Performance

Demo

# Convert darknet weights to tensorflow
## yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 

## yolov4-tiny
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --input_size 416 --model yolov4 --tiny

# Run demo tensorflow
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg

python detect.py --weights ./checkpoints/yolov4-tiny-416 --size 416 --model yolov4 --image ./data/kite.jpg --tiny

To run yolov3 or yolov3-tiny instead, change --model yolov3 in the commands above.
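For reference, a minimal Python sketch of loading the exported SavedModel directly instead of going through detect.py. This is not part of the repo's scripts; it assumes the checkpoint directory written by save_model.py above and only prints the output tensor names and shapes:

import cv2
import numpy as np
import tensorflow as tf

# Load the SavedModel exported by save_model.py and take its default signature.
saved_model = tf.saved_model.load("./checkpoints/yolov4-416")
infer = saved_model.signatures["serving_default"]

# Preprocess roughly as the demo assumes: RGB, 416x416, scaled to [0, 1].
image = cv2.cvtColor(cv2.imread("./data/kite.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0
batch = tf.constant(image[np.newaxis, ...])           # shape (1, 416, 416, 3)

# Signature functions are called with keyword arguments; look the input name up dynamically.
input_name = list(infer.structured_input_signature[1].keys())[0]
predictions = infer(**{input_name: batch})            # dict of output tensors
for name, tensor in predictions.items():
    print(name, tensor.shape)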

Output

Yolov4 original weight

Yolov4 tflite int8

Convert to tflite

# Save tf model for tflite converting
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite

# yolov4
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416.tflite

# yolov4 quantize float16
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16

# yolov4 quantize int8
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt

# Run demo tflite model
python detect.py --weights ./checkpoints/yolov4-416.tflite --size 416 --model yolov4 --image ./data/kite.jpg --framework tflite

YOLOv4 and YOLOv4-tiny int8 quantization still have some issues; I will try to fix them. In the meantime, you can try YOLOv3 and YOLOv3-tiny int8 quantization.
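For orientation, the float16 conversion above boils down to TensorFlow's post-training quantization API. A rough sketch using the generic tf.lite API (not convert_tflite.py itself, whose exact flags and internals may differ):

import tensorflow as tf

# Build a converter from the SavedModel written by save_model.py with --framework tflite.
converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-416")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]   # store weights as float16

tflite_model = converter.convert()
with open("./checkpoints/yolov4-416-fp16.tflite", "wb") as f:
    f.write(tflite_model)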

Convert to TensorRT

python save_model.py --weights ./data/yolov3.weights --output ./checkpoints/yolov3.tf --input_size 416 --model yolov3
python convert_trt.py --weights ./checkpoints/yolov3.tf --quantize_mode float16 --output ./checkpoints/yolov3-trt-fp16-416

# yolov3-tiny
python save_model.py --weights ./data/yolov3-tiny.weights --output ./checkpoints/yolov3-tiny.tf --input_size 416 --tiny
python convert_trt.py --weights ./checkpoints/yolov3-tiny.tf --quantize_mode float16 --output ./checkpoints/yolov3-tiny-trt-fp16-416

# yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4.tf --input_size 416 --model yolov4
python convert_trt.py --weights ./checkpoints/yolov4.tf --quantize_mode float16 --output ./checkpoints/yolov4-trt-fp16-416
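Conceptually, convert_trt.py wraps TensorFlow's TF-TRT converter; a minimal sketch of that API under that assumption (an illustration, not the script's actual code):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# FP16 conversion parameters; all other fields keep their defaults.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="./checkpoints/yolov4.tf",
    conversion_params=params)
converter.convert()                                   # replace supported subgraphs with TRT ops
converter.save("./checkpoints/yolov4-trt-fp16-416")   # write the optimized SavedModel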

Evaluate on COCO 2017 Dataset

# run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset
# preprocess coco dataset
cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
python coco_annotation.py --coco_path ./coco 
cd ..

# evaluate yolov4 model
python evaluate.py --weights ./data/yolov4.weights
cd mAP/extra
python remove_space.py
cd ..
python main.py --output results_yolov4_tf

mAP50 on COCO 2017 Dataset

Detection 512x512 416x416 320x320
YoloV3 55.43 52.32
YoloV4 61.96 57.33

Benchmark

python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights

TensorRT performance

YoloV4 416 images/s   FP32   FP16   INT8
Batch size 1            55    116      -
Batch size 8            70    152      -

Tesla P100

Detection 512x512 416x416 320x320
YoloV3 FPS 40.6 49.4 61.3
YoloV4 FPS 33.4 41.7 50.0

Tesla K80

Detection 512x512 416x416 320x320
YoloV3 FPS 10.8 12.9 17.6
YoloV4 FPS 9.6 11.7 16.0

Tesla T4

Detection 512x512 416x416 320x320
YoloV3 FPS 27.6 32.3 45.1
YoloV4 FPS 24.0 30.3 40.1

Tesla P4

Detection 512x512 416x416 320x320
YoloV3 FPS 20.2 24.2 31.2
YoloV4 FPS 16.2 20.2 26.5

Macbook Pro 15 (2.3GHz i7)

Detection 512x512 416x416 320x320
YoloV3 FPS
YoloV4 FPS

Training your own model

# Prepare your dataset
# If you want to train from scratch:
#   in config.py set FISRT_STAGE_EPOCHS=0
# Run script:
python train.py

# Transfer learning: 
python train.py --weights ./data/yolov4.weights

The training performance has not been fully reproduced yet, so I recommend using Alex's Darknet to train on your own data and then converting the .weights file to TensorFlow or TFLite.

TODO

  • Convert YOLOv4 to TensorRT
  • YOLOv4 tflite on android
  • YOLOv4 tflite on ios
  • Training code
  • Update scale xy
  • ciou
  • Mosaic data augmentation
  • Mish activation
  • yolov4 tflite version
  • yolov4 int8 tflite version for mobile

References

  • YOLOv4: Optimal Speed and Accuracy of Object Detection.
  • darknet

My project is inspired by these previous fantastic YOLOv3 implementations.

tensorflow-yolov4-tflite's People

Contributors

bessszilard, dragonsongohan, hhk7734, hunglc007, jzoker, nobilearn, paroque28, romstriker, vincent7293, winstonhutiger, wooruang


tensorflow-yolov4-tflite's Issues

What is the problem? Aborted?

2020-05-24 20:35:58.385069: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1592] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-05-24 20:35:58.397870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-24 20:35:58.403026: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-05-24 20:35:58.407979: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-05-24 20:36:02.141023: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-05-24 20:36:02.146717: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] constant_folding: Graph size after: 1356 nodes (-541), 3100 edges (-541), time = 1700.85303ms.
2020-05-24 20:36:02.154191: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] constant_folding: Graph size after: 1356 nodes (0), 3100 edges (0), time = 610.245ms.
Traceback (most recent call last):
File "convert_tflite.py", line 111, in
app.run(main)
File "C:\Users\Park\anaconda3\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\Park\anaconda3\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "convert_tflite.py", line 106, in main
save_tflite()
File "convert_tflite.py", line 80, in save_tflite
tflite_model = converter.convert()
File "C:\Users\Park\anaconda3\lib\site-packages\tensorflow_core\lite\python\lite.py", line 464, in convert
**converter_kwargs)
File "C:\Users\Park\anaconda3\lib\site-packages\tensorflow_core\lite\python\convert.py", line 457, in toco_convert_impl
enable_mlir_converter=enable_mlir_converter)
File "C:\Users\Park\anaconda3\lib\site-packages\tensorflow_core\lite\python\convert.py", line 203, in toco_convert_protos
raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2020-05-24 20:39:16.123023: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-24 20:39:20.043565: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1121 operators, 1900 arrays (0 quantized)
2020-05-24 20:39:20.064185: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1121 operators, 1900 arrays (0 quantized)
2020-05-24 20:39:20.772842: F tensorflow/lite/toco/graph_transformations/propagate_fixed_sizes.cc:460] Check failed: input_flat_size == RequiredBufferSizeForShape(output_shape) (73008 vs. 689520)Input cannot be reshaped to requested dimensions for Reshape op with output "model/tf_op_layer_Reshape/Reshape". Are your input shapes correct?
Fatal Python error: Aborted

Current thread 0x00001250 (most recent call first):
File "c:\users\park\anaconda3\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 56 in execute
File "c:\users\park\anaconda3\lib\site-packages\absl\app.py", line 250 in run_main
File "c:\users\park\anaconda3\lib\site-packages\absl\app.py", line 299 in run
File "c:\users\park\anaconda3\lib\site-packages\tensorflow_core\python\platform\app.py", line 40 in run
File "c:\users\park\anaconda3\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 93 in main
File "C:\Users\Park\anaconda3\Scripts\toco_from_protos.exe_main
.py", line 7 in
File "c:\users\park\anaconda3\lib\runpy.py", line 85 in _run_code
File "c:\users\park\anaconda3\lib\runpy.py", line 193 in _run_module_as_main

I think the custom model is not being converted.

I tried the normal yolov4.weights and it works well, but I want to convert my custom weights.

coco_data_path

preprocess coco dataset

cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input COCO_ANOTATION_DATA_PATH --output val2017.pkl
python coco_annotation.py --coco_path COCO_DATA_PATH
cd ..

So, what is COCO_DATA_PATH, and where is it?

How to set different width and height of TRAIN.INPUT_SIZE?

I tried to set height=80 and width=240, but it reports the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 2880 values, but the requested shape has 7680

How can I solve the error?

convert_tflite.py

If I have a TensorFlow ckpt or pb model, can I use convert_tflite.py to convert it?

where is yolov4full.tflite ?

I tried to build and install this repo's Android app, but it reports the error:
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: yolov4full.tflite

Restoring checkpoint and total loss

Hello,
I trained a model from scratch with a custom dataset and ended up with this:

giou_loss: 5.81   conf_loss: 3.66   prob_loss: 1.10   total_loss: 10.57

When restoring checkpoints for training, assuming the training data are the same, the conf_loss increases a lot after step 2:

=> STEP    1   lr: 0.001000   giou_loss: 5.81   conf_loss: 3.66   prob_loss: 1.10   total_loss: 10.57
=> STEP    2   lr: 0.000000   giou_loss: 13.02   conf_loss: 75.97   prob_loss: 10.71   total_loss: 99.71
=> STEP    3   lr: 0.000000   giou_loss: 11.97   conf_loss: 75.60   prob_loss: 10.89   total_loss: 98.46
=> STEP    4   lr: 0.000000   giou_loss: 16.61   conf_loss: 96.86   prob_loss: 14.58   total_loss: 128.05
=> STEP    5   lr: 0.000000   giou_loss: 23.43   conf_loss: 93.97   prob_loss: 16.05   total_loss: 133.45
=> STEP    6   lr: 0.000000   giou_loss: 12.04   conf_loss: 80.76   prob_loss: 10.66   total_loss: 103.46
=> STEP    7   lr: 0.000000   giou_loss: 22.63   conf_loss: 97.42   prob_loss: 16.15   total_loss: 136.20
=> STEP    8   lr: 0.000000   giou_loss: 19.34   conf_loss: 93.15   prob_loss: 12.77   total_loss: 125.27
=> STEP    9   lr: 0.000000   giou_loss: 16.54   conf_loss: 77.26   prob_loss: 15.48   total_loss: 109.28
=> STEP   10   lr: 0.000000   giou_loss: 19.21   conf_loss: 89.43   prob_loss: 17.07   total_loss: 125.71
=> STEP   11   lr: 0.000000   giou_loss: 13.00   conf_loss: 81.75   prob_loss: 9.75   total_loss: 104.50

I tried resetting FISRT_STAGE_EPOCHS to 20 in config.py.

The learning rate used for training the model from scratch was set to:

__C.TRAIN.LR_INIT             = 1e-5
__C.TRAIN.LR_END              = 1e-6

I'm restoring as follows:

$ python3 train.py --model="yolov4" --weights="./checkpoints/yolov4"

Any advice?

Why does tf-yolov4 give very different results from darknet?

This is the darknet result:
[image: darknet416]

and this is the yolov4-tf result:
[image: yolov4-tf]

I use my own trained .weights to run inference on that image. Both input sizes are 416, thresh is 0.5, and iou_thresh is 0.45, but I don't know why tf-yolov4 gives such different results from darknet.

Here are the darknet results:
[image: darknet_result]

Supporting Yolov4-tiny

Thank you @hunglc007 for this helpful repo. Can you share how to use yolov4-tiny? If it's not supported, are you planning to support it in the near future?

How to detect with trained weights? I'm getting an error

After training, the checkpoints folder contains the file yolov4.data-00001-of-00002. I try to detect with the command python detect.py --weights=checkpoints/yolov4.data-00001-of-00002 --framework tf --size 416 --image f:/test.jpg but I get an error:

Traceback (most recent call last):
 File "detect.py", line 105, in <module>
   app.run(main)
 File "C:\Users\123\AppData\Local\Programs\Python\Python38\lib\site-packages\absl\app.py", line 299, in run
   _run_main(main, args)
 File "C:\Users\123\AppData\Local\Programs\Python\Python38\lib\site-packages\absl\app.py", line 250, in _run_main
   sys.exit(main(argv))
 File "detect.py", line 73, in main
   utils.load_weights(model, FLAGS.weights)
 File "F:\test_task\YOLOs\tensorflow-yolov4-tflite-master\core\utils.py", line 114, in load_weights
   conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 24571 into shape (24,1024,1,1)

So what am I doing wrong?

missing core/__init__.py

From my experience, an __init__.py file is missing from the core directory.

I am facing this issue:

Traceback (most recent call last):                                                                                                                                                                             
  File "convert_tflite.py", line 6, in <module>                                                                                                                                                                
    from core.yolov4 import YOLOv4, YOLOv3, YOLOv3_tiny, decode                                                                                                                                                
ImportError: No module named core.yolov4

Feed non squared input images to YOLOv4 model

Hi,

I am facing some shape issues while trying to feed the network with non-square input sizes. For a YOLOv3 model (not from your repo), this worked as long as the input size was a multiple of 32 in height and width (e.g. 1024x544). For the YOLOv4 model, I can build the graph, but inference fails with a shape error on a reshape layer. Are there any limitations on the model input shape for YOLOv4?

Error message:
Input to reshape is a tensor with 2219520 values, but the requested shape has 1179120 [[{{node model/tf_op_layer_Reshape/Reshape}}]]

Thanks for your help.

the loss is nan

Hi,
When I run python train.py on the COCO data, following your tutorial exactly,
I get:
=> STEP 49 lr: 0.000020 giou_loss: 9.46 conf_loss: 42.83 prob_loss: 6.38 total_loss: 58.67
=> STEP 50 lr: 0.000020 giou_loss: nan conf_loss: nan prob_loss: nan total_loss: nan
=> STEP 51 lr: 0.000021 giou_loss: nan conf_loss: nan prob_loss: nan total_loss: nan
=> STEP 52 lr: 0.000021 giou_loss: nan conf_loss: nan prob_loss: nan total_loss: nan
All parameters I use are the defaults.

In addition, I think the ninth line of the main function in train.py should be:
steps_per_epoch = len(trainset) // cfg.TRAIN.BATCH_SIZE

Your code is:
steps_per_epoch = len(trainset)

I am a beginner in computer vision; I don't know if my understanding is wrong.

Full Integer Quantization Not Working

I tried to get the full int8 quantization by running convert_tflite.py and setting the flag --quantize_mode full_int8. However, I got the following error:

RuntimeError: Quantization not yet supported for op: RESIZE_NEAREST_NEIGHBOR

I gave it a representative dataset and I have all the requirements installed. Has anyone else been able to do a full int8 quantization of yolo? Thank you!
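For comparison, the generic full-integer post-training quantization flow with the tf.lite API looks roughly like the sketch below (an illustration with a random representative dataset, not this repo's convert_tflite.py). On the TF versions discussed here, the converter can still reject individual ops, which matches the RESIZE_NEAREST_NEIGHBOR error reported above:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # In practice, yield real preprocessed images shaped like the model input;
    # random data is used here only to keep the sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-416")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()   # raises if an op has no int8 kernel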

integrating .tflite to android

Hello there,
Your project works fine and I have converted the .tflite version. Now my question is: how can I integrate the .tflite file into Android?

Thanks

transfer learning broken using weights from google drive

Running train.py with --weights=./yolov4.weights causes an assert error in utils.load_weights. If the line assert len(wf.read()) == 0, 'failed to read all data' is commented out, training starts with nan:

=> STEP   63   lr: 0.000027   giou_loss:  nan   conf_loss:  nan   prob_loss:  nan   total_loss:  nan
=> STEP   64   lr: 0.000028   giou_loss:  nan   conf_loss:  nan   prob_loss:  nan   total_loss:  nan

Weird results with Unity Emgu TFLite

Hi,

I'm a newbie in the ML world. I'm trying to use your yolov4.tflite with the Unity Emgu TFLite library, but I'm getting weird results. After many wasted hours, I've come here to ask for help.

My input is (the dog-bicycle-car example image):

        NativeImageIO.ReadTensorFromTexture2D<byte>(
            texture2D,                  // input image as Texture2D Unity format
            _inputTensor.DataPointer,   // dest
            416,                        // inputHeigh
            416,                        // inputWidth
            0.0f,                       // inputMean. What's this? leaving default 0
            0.003921f,                  // scale 1/255
            true,                       // flipUpsideDown (I tried both)
            true);                      // swapBR (I tried both)

and I get as output 3 arrays with these sizes:
1x52x52x3x85
1x26x26x3x85
1x13x13x3x85

that sounds good according to the YOLO specifications:
? x GridX x GridY x NumBoundingBoxes x Scores

and each score[85] array should have:
position 0 -> 4 [probability, x, y, width, height] or [x, y, width, height, probability] not sure
position 5 -> 85 score value for each class
and the confidence should be: confidence = probability x max_score

All these values should be between 0 and 1. Is this right?

But the results I'm getting are over 1 or below 0, which is weird. For example, the scores[85] array at any GridX,GridY position returns values like these:

  | [0] | 2.329916 | float
  | [1] | -0.7739916 | float
  | [2] | -2.228415 | float
  | [3] | 0.07396033 | float
  | [4] | 3.220351E-08 | float
  | [5] | 0.9996939 | float
  | [6] | 5.228583E-07 | float
  | [7] | 0.0009952839 | float
  | [8] | 2.176867E-05 | float
  | [9] | 0.0001735534 | float
  | [10] | 4.656771E-06 | float


What am I missing? What am I doing wrong?
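For reference, one possible explanation (an assumption, not a confirmed diagnosis) is that the exported .tflite returns the raw detection-head tensors, in which case values outside [0, 1] are expected until a decode step is applied. A generic YOLO-style decode sketch in Python; the anchor values are illustrative, not this repo's exact implementation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_raw_head(raw, anchors):
    # raw: (1, grid, grid, 3, 85) raw head output.
    #   [..., 0:2] centre offsets, [..., 2:4] log width/height,
    #   [..., 4]   objectness,     [..., 5:]  per-class scores.
    xy_offsets = sigmoid(raw[..., 0:2])           # cell-relative centre, now in [0, 1]
    wh = np.exp(raw[..., 2:4]) * anchors          # scaled by per-box anchor (w, h)
    objectness = sigmoid(raw[..., 4:5])
    class_probs = sigmoid(raw[..., 5:])
    confidence = objectness * class_probs         # per-class confidence, now in [0, 1]
    return xy_offsets, wh, confidence

# Example with the 13x13 head and three illustrative anchors (width, height in pixels).
raw = np.random.randn(1, 13, 13, 3, 85).astype(np.float32)
anchors = np.array([[142, 110], [192, 243], [459, 401]], dtype=np.float32)
xy, wh, conf = decode_raw_head(raw, anchors)
print(xy.shape, wh.shape, conf.shape)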

Use VOC datasets for training? Or use our own datasets for training? Not the COCO dataset.

Has anyone changed yolov4.weights to .tf or .ckpt type suitable for tensorflow training?

When I want to train with VOC datasets, I can't use yolov4.weights.

Similarly, I can't use yolov4.weights when I want to train on my own data, because neither has 80 object classes like the COCO dataset.

The author of this codebase has already done very good work; is there anyone who can improve this code and make it applicable to any dataset? Because I am a rookie, I do not have the ability to modify it myself.

Thank you to the author and everyone.

TF 2.2.0 - errors out due to "using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function."

Hi - I'm running the conversion on TF 2.2.0 and encounter an error about using a tf.Tensor as a Python bool. I'll test on 2.1 next, but wanted to report this now since more people will be using 2.2.0 soon.

C:\Users\lessw\tf2yolo4>python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite 2020-05-05 09:55:27.109218: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2020-05-05 09:55:28.977261: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2020-05-05 09:55:29.034033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.125GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s 2020-05-05 09:55:29.037768: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2020-05-05 09:55:29.041449: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2020-05-05 09:55:29.045138: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll 2020-05-05 09:55:29.047094: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll 2020-05-05 09:55:29.050725: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll 2020-05-05 09:55:29.054087: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll 2020-05-05 09:55:29.059581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2020-05-05 09:55:29.062256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0 2020-05-05 09:55:29.063914: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-05-05 09:55:29.074065: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1b8334c57a0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-05-05 09:55:29.076775: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-05-05 09:55:29.078977: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.125GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s 2020-05-05 09:55:29.082237: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2020-05-05 09:55:29.084764: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2020-05-05 09:55:29.086438: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll 2020-05-05 09:55:29.088690: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll 2020-05-05 09:55:29.090423: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll 2020-05-05 09:55:29.092110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll 2020-05-05 09:55:29.093784: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2020-05-05 09:55:29.095770: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0 2020-05-05 09:55:29.614514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-05-05 09:55:29.616276: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091] 0 2020-05-05 09:55:29.617448: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0: N 2020-05-05 09:55:29.619011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6230 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5) 2020-05-05 09:55:29.624471: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1b85ebd15f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-05-05 09:55:29.626731: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 2070, Compute Capability 7.5 WARNING:tensorflow:AutoGraph could not transform <bound method BatchNormalization.call of <core.common.BatchNormalization object at 0x000001B833E27908>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: 'arguments' object has no attribute 'posonlyargs' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert W0505 09:55:29.957932 21456 ag_logging.py:146] AutoGraph could not transform <bound method BatchNormalization.call of <core.common.BatchNormalization object at 0x000001B833E27908>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Traceback (most recent call last):
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 578, in converted_call
converted_f = conversion.convert(target_entity, program_ctx)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py", line 101, in convert
entity, program_ctx.options, program_ctx, custom_vars)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 412, in transform_function
extra_locals)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 373, in _transformed_factory
nodes, ctx = self._transform_function(fn, user_context)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 339, in _transform_function
node = self.transform_ast(node, context)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py", line 61, in transform_ast
node = converter.standard_analysis(node, ctx, is_initial=True)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\core\converter.py", line 355, in standard_analysis
node = activity.resolve(node, context, None)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 685, in resolve
return ActivityAnalyzer(context, parent_scope).visit(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transformer.py", line 436, in visit
result = super(Base, self).visit(node)
File "C:\Users\lessw\anaconda3\lib\ast.py", line 271, in visit
return visitor(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 569, in visit_FunctionDef
node = self._visit_arg_annotations(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 545, in _visit_arg_annotations
node = self._visit_arg_declarations(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 550, in _visit_arg_declarations
node.args.posonlyargs = self._visit_node_list(node.args.posonlyargs)
AttributeError: 'arguments' object has no attribute 'posonlyargs'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py", line 943, in call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 259, in wrapper
return converted_call(f, args, kwargs, options=options)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 585, in converted_call
return _fall_back_unconverted(f, args, kwargs, options, e)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 393, in _fall_back_unconverted
return _call_unconverted(f, args, kwargs, options)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 343, in _call_unconverted
return f(*args, **kwargs)
File "C:\Users\lessw\tf2yolo4\core\common.py", line 14, in call
if not training:
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 926, in bool
self._disallow_bool_casting()
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 539, in _disallow_bool_casting
self._disallow_in_graph_mode("using a tf.Tensor as a Python bool")
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 528, in _disallow_in_graph_mode
" this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "convert_tflite.py", line 109, in
app.run(main)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "convert_tflite.py", line 104, in main
save_tflite()
File "convert_tflite.py", line 53, in save_tflite
feature_maps = YOLOv4(input_layer, NUM_CLASS)
File "C:\Users\lessw\tf2yolo4\core\yolov4.py", line 60, in YOLOv4
route_1, route_2, conv = backbone.cspdarknet53(input_layer)
File "C:\Users\lessw\tf2yolo4\core\backbone.py", line 41, in cspdarknet53
input_data = common.convolutional(input_data, (3, 3, 3, 32), activate_type="mish")
File "C:\Users\lessw\tf2yolo4\core\common.py", line 33, in convolutional
if bn: conv = BatchNormalization()(conv)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py", line 955, in call
str(e) + '\n"""')
TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass dynamic=True to the class constructor.
Encountered error:
"""
using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
"""

C:\Users\lessw\tf2yolo4>
C:\Users\lessw\tf2yolo4>python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-fp16.tflite --quantize_mode float16
2020-05-05 09:55:50.534712: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 09:55:52.407068: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-05-05 09:55:52.464449: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5
coreClock: 1.125GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s
2020-05-05 09:55:52.468171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 09:55:52.471904: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-05-05 09:55:52.475932: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-05-05 09:55:52.477943: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-05-05 09:55:52.481599: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-05-05 09:55:52.484848: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-05-05 09:55:52.490466: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-05-05 09:55:52.493474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-05-05 09:55:52.494907: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-05-05 09:55:52.504471: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1f0a58c8620 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-05 09:55:52.506869: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-05-05 09:55:52.508837: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5
coreClock: 1.125GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s
2020-05-05 09:55:52.512381: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 09:55:52.514151: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-05-05 09:55:52.515867: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-05-05 09:55:52.517476: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-05-05 09:55:52.519616: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-05-05 09:55:52.521364: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-05-05 09:55:52.523083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-05-05 09:55:52.525053: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
2020-05-05 09:55:53.033268: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-05 09:55:53.035123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1091] 0
2020-05-05 09:55:53.036180: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1104] 0: N
2020-05-05 09:55:53.037673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6230 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-05-05 09:55:53.042743: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1f0d16ea700 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-05-05 09:55:53.044876: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 2070, Compute Capability 7.5
WARNING:tensorflow:AutoGraph could not transform <bound method BatchNormalization.call of <core.common.BatchNormalization object at 0x000001F0A66A7BC8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W0505 09:55:53.370318 18732 ag_logging.py:146] AutoGraph could not transform <bound method BatchNormalization.call of <core.common.BatchNormalization object at 0x000001F0A66A7BC8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Traceback (most recent call last):
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 578, in converted_call
converted_f = conversion.convert(target_entity, program_ctx)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py", line 101, in convert
entity, program_ctx.options, program_ctx, custom_vars)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 412, in transform_function
extra_locals)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 373, in _transformed_factory
nodes, ctx = self._transform_function(fn, user_context)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transpiler.py", line 339, in _transform_function
node = self.transform_ast(node, context)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py", line 61, in transform_ast
node = converter.standard_analysis(node, ctx, is_initial=True)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\core\converter.py", line 355, in standard_analysis
node = activity.resolve(node, context, None)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 685, in resolve
return ActivityAnalyzer(context, parent_scope).visit(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transformer.py", line 436, in visit
result = super(Base, self).visit(node)
File "C:\Users\lessw\anaconda3\lib\ast.py", line 271, in visit
return visitor(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 569, in visit_FunctionDef
node = self._visit_arg_annotations(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 545, in _visit_arg_annotations
node = self._visit_arg_declarations(node)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py", line 550, in _visit_arg_declarations
node.args.posonlyargs = self._visit_node_list(node.args.posonlyargs)
AttributeError: 'arguments' object has no attribute 'posonlyargs'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py", line 943, in call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 259, in wrapper
return converted_call(f, args, kwargs, options=options)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 585, in converted_call
return _fall_back_unconverted(f, args, kwargs, options, e)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 393, in _fall_back_unconverted
return _call_unconverted(f, args, kwargs, options)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py", line 343, in _call_unconverted
return f(*args, **kwargs)
File "C:\Users\lessw\tf2yolo4\core\common.py", line 14, in call
if not training:
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 926, in bool
self._disallow_bool_casting()
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 539, in _disallow_bool_casting
self._disallow_in_graph_mode("using a tf.Tensor as a Python bool")
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py", line 528, in _disallow_in_graph_mode
" this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "convert_tflite.py", line 109, in
app.run(main)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "convert_tflite.py", line 104, in main
save_tflite()
File "convert_tflite.py", line 53, in save_tflite
feature_maps = YOLOv4(input_layer, NUM_CLASS)
File "C:\Users\lessw\tf2yolo4\core\yolov4.py", line 60, in YOLOv4
route_1, route_2, conv = backbone.cspdarknet53(input_layer)
File "C:\Users\lessw\tf2yolo4\core\backbone.py", line 41, in cspdarknet53
input_data = common.convolutional(input_data, (3, 3, 3, 32), activate_type="mish")
File "C:\Users\lessw\tf2yolo4\core\common.py", line 33, in convolutional
if bn: conv = BatchNormalization()(conv)
File "C:\Users\lessw\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py", line 955, in call
str(e) + '\n"""')
TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass dynamic=True to the class constructor.
Encountered error:
"""
using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
"""`

Test the .pb format file

When I test the .pb format file using the following code:

with tf.Session() as sess:
    with tf.gfile.FastGFile("saved_model.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        tf.Graph.as_graph_def()
        graph_def.ParseFromString(f.read())
        g_in = tf.import_graph_def(graph_def)
    LOGDIR = '/log'
    train_writer = tf.summary.FileWriter(LOGDIR)
    train_writer.add_graph(sess.graph)

it gives the following error:

File "testing.py", line 7, in
graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message

How do I solve this problem? Thanks.
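One likely explanation (an assumption, not a confirmed answer): the saved_model.pb written by save_model.py is part of a TF2 SavedModel directory, not a frozen GraphDef, so parsing the .pb file alone with tf.GraphDef fails. A minimal sketch of loading it the SavedModel way:

import tensorflow as tf

# Point at the SavedModel directory (it contains saved_model.pb plus a variables/ folder).
model = tf.saved_model.load("./checkpoints/yolov4-416")
infer = model.signatures["serving_default"]
print(infer.structured_input_signature)   # inspect the expected inputs
print(infer.structured_outputs)           # inspect the output tensor specs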

convert_tflite.py crash on default dataset

TF-gpu 2.1.0 + all required libraries
CUDA 10.1
Downloaded yolov4.weights from the link

Script command:
python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite

Crash:
FileNotFoundError: [Errno 2] No such file or directory: '/media/user/Source/Data/coco_dataset/coco/5k.txt'

Looks like explicitly setting --quantize_mode works around this logic bug.

Thanks,
Rob

How

TensorRT has been improved. How can I test the effect?

Use Softplus in place of pure baseline for Mish.

Hi.
Noticing your implementation of Mish, I would suggest using the Softplus implementation instead of writing the inner function as log(1 + exp(x)).
Softplus provides a threshold, which is much more stable and avoids gradient overflow.
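A sketch of the suggested change, assuming the current code writes the inner term as log(1 + exp(x)); tf.math.softplus is the thresholded, numerically stable equivalent:

import tensorflow as tf

def mish_naive(x):
    # Can overflow for large positive x because exp(x) blows up.
    return x * tf.math.tanh(tf.math.log(1.0 + tf.math.exp(x)))

def mish_stable(x):
    # Softplus applies an internal threshold, avoiding the overflow.
    return x * tf.math.tanh(tf.math.softplus(x))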

Error when converting a .weights file trained on my own dataset with the darknet framework

Hello, using the command python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4-fp16.tflite --quantize_mode float16
I want to convert a .weights file trained under the darknet framework into a tflite file, but the following problem occurs:
ValueError: cannot reshape array of size 4559937 into shape (1024,512,3,3)
The full error output is as follows:

Traceback (most recent call last):
  File "convert_tflite.py", line 109, in <module>
    app.run(main)
  File "E:\anaconda\envs\python-train\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "E:\anaconda\envs\python-train\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "convert_tflite.py", line 104, in main
    save_tflite()
  File "convert_tflite.py", line 59, in save_tflite
    utils.load_weights(model, FLAGS.weights)
  File "E:\11yolov4\tensorflow-yolov4-tflite\tensorflow-yolov4-tflite-master\core\utils.py", line 114, in load_weights
    conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 4559937 into shape (1024,512,3,3)

How can I solve this problem?

What are 'conv' and 'pred', respectively?

Hello,

I'm trying to run your code on my own dataset. I'm confused by this part:

        with tf.GradientTape() as tape:
            pred_result = model(image_data, training=True)
            giou_loss = conf_loss = prob_loss = 0

            # optimizing process
            for i in range(3):
                conv, pred = pred_result[i * 2], pred_result[i * 2 + 1]
                # loss_items = compute_loss(pred, conv, target[i][0], target[i][1], STRIDES=STRIDES, NUM_CLASS=NUM_CLASS, IOU_LOSS_THRESH=IOU_LOSS_THRESH, i=i)
                loss_items = compute_loss(pred, conv, *target[i], i)
                giou_loss += loss_items[0]
                conf_loss += loss_items[1]
                prob_loss += loss_items[2]

Here, I think pred_result will definitely be a list of length 3. But from i*2 in the for loop, it seems like it is of length 6. How can that be?

I ask this because I ran into the issue shown below.

  File "/opt/project/train.py", line 167, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/opt/project/train.py", line 160, in main
    train_step(image_data, target)
  File "/opt/project/train.py", line 96, in train_step
    loss_items = compute_loss(pred, conv, *target[i], i)
  File "/opt/project/core/yolov4.py", line 272, in compute_loss
    giou = tf.expand_dims(bbox_giou(pred_xywh, label_xywh), axis=-1)
  File "/opt/project/core/yolov4.py", line 238, in bbox_giou
    left_up = tf.maximum(boxes1[..., :2], boxes2[..., :2])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 5736, in maximum
    _ops.raise_from_not_ok_status(e, name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [2,26,26,3,2] vs. [2,52,52,3,2] [Op:Maximum]

Process finished with exit code 1

I'd appreciate any thoughts or help!

Anchors for yolov4

Found the answer to this question:
Can you please explain how you chose the anchor sizes for yolov4? Are they optimal for tflite, or are they optimal for the general yolov4 architecture?
Answer: these are the preloaded anchors: (12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401)
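For convenience, the same preloaded anchors written out as a (width, height) array; the (9, 2) layout and the grouping into three anchors per detection scale are the usual YOLO convention, assumed here rather than stated in the answer above:

import numpy as np

ANCHORS = np.array([12, 16, 19, 36, 40, 28,        # smallest-object scale
                    36, 75, 76, 55, 72, 146,       # medium-object scale
                    142, 110, 192, 243, 459, 401], # largest-object scale
                   dtype=np.float32).reshape(9, 2)
print(ANCHORS)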

weights array size does not match?

When I try to convert, this error happens:

core/utils.py, line 114
conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 4629942 into shape (1024,512,3,3)

Thanks in advance!

How to convert to .pb?

The README says it can "Convert YOLO v4, YOLOv3, YOLO tiny .weights to .pb", but there is no example anywhere.

Failed to convert to tensorrt

I followed the readme.md and tried to convert yolov4.weights to TensorRT, but it failed.

My environment: TensorRT 7.0, CUDA 10.2, TensorFlow 2.1

log:
2020-05-15 08:34:10.916918: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-05-15 08:34:10.944378: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3299990000 Hz
2020-05-15 08:34:10.945980: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f5540000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-15 08:34:10.946030: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-05-15 08:34:10.950932: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-15 08:34:11.537781: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4632960 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-05-15 08:34:11.537835: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-05-15 08:34:11.537852: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-05-15 08:34:11.537867: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-05-15 08:34:11.537881: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (3): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-05-15 08:34:11.540423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:19:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2020-05-15 08:34:11.541094: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties:
pciBusID: 0000:1a:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2020-05-15 08:34:11.541749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 2 with properties:
pciBusID: 0000:67:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2020-05-15 08:34:11.542381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 3 with properties:
pciBusID: 0000:68:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2020-05-15 08:34:11.542631: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64::/usr/loacl/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/loacl/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2020-05-15 08:34:11.544150: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-05-15 08:34:11.545595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-05-15 08:34:11.545862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-05-15 08:34:11.547458: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-05-15 08:34:11.548321: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-05-15 08:34:11.551536: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-15 08:34:11.551555: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-05-15 08:34:11.551776: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-15 08:34:11.551785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 1 2 3
2020-05-15 08:34:11.551791: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N N N N
2020-05-15 08:34:11.551797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 1: N N N N
2020-05-15 08:34:11.551801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 2: N N N N
2020-05-15 08:34:11.551806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 3: N N N N
2020-05-15 08:34:11.556629: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64::/usr/loacl/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/loacl/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2020-05-15 08:34:11.556645: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Fatal Python error: Aborted

Current thread 0x00007f563ea67740 (most recent call first):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 259 in _check_trt_version_compatibility
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 998 in init
File "convert_trt.py", line 51 in save_trt
File "convert_trt.py", line 87 in main
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250 in _run_main
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299 in run
File "convert_trt.py", line 91 in
Aborted (core dumped)

Inference on saved Tensorflow model

I converted the weights using save_model.py and got the final model.
My question is how to load it via Keras/Tensorflow to do inference?

model = tf.saved_model.load(str(model_dir), tags=['serve'])
model = model.signatures['serving_default']

resized_rgb_image = resized_rgb_image.astype(np.float32)
input_image = np.expand_dims(resized_rgb_image, axis=0)
input_tensor = tf.convert_to_tensor(input_image)
output_dict = model(input_tensor)

I get tensorflow.python.framework.errors_impl.FailedPreconditionError

tensorflow.python.framework.errors_impl.FailedPreconditionError:  Error while reading resource variable batch_normalization_56/moving_mean_60226 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/batch_normalization_56/moving_mean_60226/class tensorflow::Var does not exist.
	 [[{{node StatefulPartitionedCall/model_1/batch_normalization_56/FusedBatchNormV3/ReadVariableOp}}]] [Op:__inference_signature_wrapper_8573]

intersect-gt-and-pred.py

Hi,
I follow your tutorial exactly, and when I run the main.py in mAP folder like below:
python main.py --output results_yolov4_tf,
I get:
Error. File not found: predicted/20.txt
You can avoid this error message by running extra/intersect-gt-and-pred.py

but there is no intersect-gt-and-pred.py file in extra.

I wonder where I went wrong.

GPU usage

Thanks for your great work and for sharing.

I am currently using your code to train on the AOC dataset,

but the GPU usage is quite low, always below 50% on a single Titan Xp.

  1. Will you improve it?
  2. Will you add more backbones, such as MobileNet?
  3. Will it support multi-GPU training?

Many thanks for open-sourcing this.

Best regards, and best wishes for your work.

Tensorboard log and predicted images

Instead of only writing the predicted images to the filesystem during evaluation, wouldn't it be great to also have the predictions in TensorBoard, under the Images tab?

Easy fix: keep an images list []

and then something like :

with writer.as_default():
        images_2 = np.reshape(images[0:1000], (-1, 320, 320, 3))
        tf.summary.image("image data examples", images_2, max_outputs=1000, step=FLAGS.step)

Problem: keeping track of the step. Suggestion: pass it as an external flag.

If you wish I can prepare a PR for that.

failed to read all data

Hi
I get a 'failed to read all data' error in utils.py, line 122:
assert len(wf.read()) == 0, 'failed to read all data'
In my case, len(wf.read()) equals 257717640 (257 MB), and the read itself seems fine, so why should it be 0?
