PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.

Home Page: https://qiita.com/PINTO

License: MIT License

Shell 15.95% Python 83.50% C++ 0.24% Dockerfile 0.02% Jupyter Notebook 0.30%
tensorflow tensorflow-lite openvino edgetpu mediapipe coreml tensorflowjs tf-trt onnx pytorch

PINTO_model_zoo's Issues

BlazeFace confidence score

Excellent conversion of the models! But how can I get the correct confidence score?
My code:

import numpy as np
import cv2
import tensorflow as tf

def load_graph(frozen_graph_filename):
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
            serialized_graph = f.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')

    return detection_graph

detection_face = load_graph('face_detection_front.pb')

img_inputs_face = detection_face.get_tensor_by_name('input:0')


classificators = detection_face.get_tensor_by_name('classificators:0')
regressors = detection_face.get_tensor_by_name('regressors:0')

img = cv2.imread('2-1.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (128, 128))
image_np_expanded = np.expand_dims(img, axis=0)

persistent_sess = tf.Session(graph=detection_face)

(pred_classificators, pred_regressors) = persistent_sess.run(
    [classificators, regressors],
    feed_dict={img_inputs_face: image_np_expanded})

I get very strange pred_classificators values:

[[-1.16985550e+01],
       [-9.47585297e+00],
       [-9.12463665e+00],
       [-5.20070457e+00],
       [-1.55940714e+01],
       [-1.01529875e+01],
       [-1.26312046e+01],
       [-4.64342594e+00],
       [-1.42407932e+01],
       [-7.08438396e+00],
       [-1.52337151e+01],
       .....]
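
For reference, BlazeFace-style post-processing usually treats the raw classificators as logits, so a sigmoid maps them to confidence scores in [0, 1]. A minimal sketch, assuming that convention (the threshold value is just an example):

import numpy as np

def logits_to_scores(pred_classificators, clip=100.0):
    # Clip to avoid overflow in exp(), then apply the sigmoid.
    logits = np.clip(np.squeeze(pred_classificators), -clip, clip)
    return 1.0 / (1.0 + np.exp(-logits))

# scores = logits_to_scores(pred_classificators)
# keep = scores > 0.75  # example confidence threshold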

model conversion failed

I tried to convert a .pb file to a .tflite file.
I ran "06_mobilenetv2-ssdlite/01_coco/01_float32/03_integer_quantization_with_postprocess.py" and got this error:

ValueError: Invalid tensors 'TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3' were found.

I've run the script to download tflite_graph.pb.
Could you please look into the problem for me?
Also, can you tell me where you got tflite_graph.pb?
Can I convert the model zoo ssdlite_mobilenet_v2_coco with your code?
Thanks!
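
For context, the "Invalid tensors" error usually means the converter is not being pointed at a graph that actually contains the TFLite_Detection_PostProcess custom op. A minimal sketch of the usual TF Object Detection API flow, assuming tflite_graph.pb was exported by export_tflite_ssd_graph.py (the tensor names and input size below are that exporter's defaults, not values verified against this repo):

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the post-process op is a TFLite custom op
tflite_model = converter.convert()
open('ssdlite_mobilenet_v2_coco.tflite', 'wb').write(tflite_model)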

BlazePose

Why not convert 'pose_landmark_upper_body.tflite' to .pb? 'pose_landmark_upper_body_256x256_float32' is not the standard model in the Google project. Can you convert 'pose_landmark_upper_body.tflite' to .pb? Thanks a lot.

Can not load saved BlazePose model.

model = keras.models.load_model('saved_model_pose_detection')


TypeError Traceback (most recent call last)
in
----> 1 model = keras.models.load_model('saved_model_pose_detection')

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\save.py in load_model(filepath, custom_objects, compile)
188 if isinstance(filepath, six.string_types):
189 loader_impl.parse_saved_model(filepath)
--> 190 return saved_model_load.load(filepath, compile)
191
192 raise IOError(

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in load(path, compile)
114 # TODO(kathywu): Add saving/loading of optimizer, compiled losses and metrics.
115 # TODO(kathywu): Add code to load from objects that contain all endpoints
--> 116 model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
117
118 # pylint: disable=protected-access

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\saved_model\load.py in load_internal(export_dir, tags, loader_cls)
602 loader = loader_cls(object_graph_proto,
603 saved_model_proto,
--> 604 export_dir)
605 root = loader.get(0)
606 if isinstance(loader, Loader):

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in __init__(self, *args, **kwargs)
186 self._models_to_reconstruct = []
187
--> 188 super(KerasObjectLoader, self).__init__(*args, **kwargs)
189
190 # Now that the node object has been fully loaded, and the checkpoint has

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\saved_model\load.py in __init__(self, object_graph_proto, saved_model_proto, export_dir)
121 self._concrete_functions[name] = _WrapperFunction(concrete_function)
122
--> 123 self._load_all()
124 self._restore_checkpoint()
125

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in _load_all(self)
207 # loaded from config may create variables / other objects during
208 # initialization. These are recorded in _nodes_recreated_from_config.
--> 209 self._layer_nodes = self._load_layers()
210
211 # Load all other nodes and functions.

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in _load_layers(self)
307 continue
308
--> 309 layers[node_id] = self._load_layer(proto.user_object, node_id)
310
311 for node_id, proto in metric_list:

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in _load_layer(self, proto, node_id)
333 # Detect whether this object can be revived from the config. If not, then
334 # revive from the SavedModel instead.
--> 335 obj, setter = self._revive_from_config(proto.identifier, metadata, node_id)
336 if obj is None:
337 obj, setter = revive_custom_object(proto.identifier, metadata)

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in _revive_from_config(self, identifier, metadata, node_id)
351 obj = (
352 self._revive_graph_network(metadata, node_id) or
--> 353 self._revive_layer_from_config(metadata, node_id))
354
355 if obj is None:

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py in _revive_layer_from_config(self, metadata, node_id)
406 try:
407 obj = layers_module.deserialize(
--> 408 generic_utils.serialize_keras_class_and_config(class_name, config))
409 except ValueError:
410 return None

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\layers\serialization.py in deserialize(config, custom_objects)
107 module_objects=globs,
108 custom_objects=custom_objects,
--> 109 printable_module_name='layer')

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
373 list(custom_objects.items())))
374 with CustomObjectScope(custom_objects):
--> 375 return cls.from_config(cls_config)
376 else:
377 # Then cls may be a function returning a class.

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in from_config(cls, config)
653 A layer instance.
654 """
--> 655 return cls(**config)
656
657 def compute_output_shape(self, input_shape):

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\layers\convolutional.py in __init__(self, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
597 kernel_constraint=constraints.get(kernel_constraint),
598 bias_constraint=constraints.get(bias_constraint),
--> 599 **kwargs)
600
601

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\layers\convolutional.py in __init__(self, rank, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, trainable, name, **kwargs)
123 name=name,
124 activity_regularizer=regularizers.get(activity_regularizer),
--> 125 **kwargs)
126 self.rank = rank
127 if filters is not None and not isinstance(filters, int):

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __init__(self, trainable, name, dtype, dynamic, **kwargs)
292 }
293 # Validate optional keyword arguments.
--> 294 generic_utils.validate_kwargs(kwargs, allowed_kwargs)
295
296 # Mutable properties

~\miniconda3\envs\py36\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
790 for kwarg in kwargs:
791 if kwarg not in allowed_kwargs:
--> 792 raise TypeError(error_message, kwarg)
793
794

TypeError: ('Keyword argument not understood:', 'groups')

deeplab cityscape edgetpu

Hi! Thank you for your work, it helps me a lot!
Now I am looking for a DeepLab model pretrained on Cityscapes and optimized for the EdgeTPU (edgetpu.tflite).
I'm trying to build it myself, but still no luck. I see you have Cityscapes quantized models at 257 and 769 - they would ideally fit my use case. I'm trying to compile one into an edgetpu.tflite file, but I get "Model not quantized".
If you can help me it would be awesome!

DeepLabv3+ and OpenVINO

Hi @PINTO0309 ! Thanks for this amazing repo. Congrats!.

I'm looking for a way to run DeepLabv3+ with OpenVINO. I see that in your repository you have several folders related to this model. It seems that the one that best fits my needs is Sample.4 - Semantic Segmentation, DeeplabV3-plus 256x256. But I would like to do the development simply by calling the executable files and the XML, as in the typical OpenVINO examples. This example requires TensorFlow installation. What about OpenVINO-bin? I did not find a lot of documentation about this latter one.
I am using an RPi4 and an OAK-D camera.

Thanks for your help.

EfficientDet INT8 Quantization

Hi @PINTO0309,

Did you manage to fully quantize the EfficientDet models? I saw there are some quantized models in your zoo: https://github.com/PINTO0309/PINTO_model_zoo/tree/master/18_EfficientDet/03_integer_quantization, but they look like they are only weight-quantized, right?
I think you didn't add:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
to limit the operators to be INT8?

I tried myself to fully quantize EfficientDet, and I failed with TF1.15.0, TF2.2.0 and TF2.3.0.

If you've managed to quantize the model, could you give me some enlightenment on this?

Thank you!
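
For reference, a minimal full-integer quantization sketch with the TFLITE_BUILTINS_INT8 restriction mentioned above (TF 2.x API; the saved_model path, input size, and random calibration data are placeholders, and this does not imply the EfficientDet graph will convert cleanly):

import numpy as np
import tensorflow as tf

def representative_dataset():
    for _ in range(100):
        # Replace with real calibration images, preprocessed the same way as for inference.
        yield [np.random.rand(1, 512, 512, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()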

Model conversion error

After running download.sh, I am trying to convert face_detection_front.pb to a quantized tflite model. I have tried all scripts inside the 30_BlazeFace/01_float32 directory, but all of them fail with the following error:

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

I am using TensorFlow 2.2.0 on MacOS. Also, tried with a Linux machine with 2.1.0 and 2.2.0.

NOTE: I am trying to rebuild quantized model to run on microcontroller. The quantized model for the blazeface provided in your repo download.sh has error while running on the microcontroller: "Didn't find op for builtin opcode 'CONV_2D' version '3'" or "Didn't find op for builtin opcode 'QUANTIZE' version '2'".
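
For context, the "single ConcreteFunction" error typically comes from a TF 2.x converter being handed a SavedModel that exposes more than one signature. A minimal workaround sketch, assuming the conversion goes through a SavedModel with a 'serving_default' signature (paths and input size are placeholders):

import tensorflow as tf

model = tf.saved_model.load('saved_model')
concrete_func = model.signatures['serving_default']
concrete_func.inputs[0].set_shape([1, 128, 128, 3])  # BlazeFace front input size
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
open('face_detection_front.tflite', 'wb').write(tflite_model)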

BlazeFace - How to use?

Does anyone know how to use BlazeFace?

Models published here return an array of 4 tensors as output:

  tf_op_layer_classificators_1: dtype: float32, shape: [1, 512, 1]
  tf_op_layer_classificators_2: dtype: float32, shape: [1, 384, 1]
  tf_op_layer_regressors_1: dtype: float32, shape: [1, 512, 16]
  tf_op_layer_regressors_2: dtype: float32, shape: [1, 384, 16]

While the model published on TFHub and used in their example app at https://www.npmjs.com/package/@tensorflow-models/blazeface returns a single tensor:

  Identity: dtype: float32, shape: [1, 896, 17]

Why? I already have a working app using the model from TFHub, but I'd like to use the BlazeFace back model, as it's 4x the resolution and much better with smaller faces, and the only model published on TFHub is the front model.
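
For reference, the single [1, 896, 17] tensor looks like the two anchor scales concatenated (512 + 384 = 896) with the 16 regressor values and the classificator score stacked per anchor. A minimal sketch of merging the four outputs under that assumption (the exact channel ordering is not verified):

import numpy as np

def merge_outputs(cls_1, cls_2, reg_1, reg_2):
    scores = np.concatenate([cls_1, cls_2], axis=1)  # [1, 896, 1]
    boxes = np.concatenate([reg_1, reg_2], axis=1)   # [1, 896, 16]
    return np.concatenate([boxes, scores], axis=2)   # [1, 896, 17]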

033_Hand_Detection_and_Tracking : handedness ?

Congratulations and thank you for the great job you are doing !

I have downloaded the Openvino models from your repository (https://github.com/PINTO0309/PINTO_model_zoo/blob/master/033_Hand_Detection_and_Tracking/07_openvino/download.sh), and I am writing some python code to use them with Openvino.
Palm detection model is working fine. But for the hand landmarks model, I don't find the handedness output described in the google paper MediaPipe Hands: On-device Real-time Hand Tracking (https://arxiv.org/pdf/2006.10214.pdf).
When I use Netron on the current model from mediapipe repo (https://github.com/google/mediapipe/blob/master/mediapipe/models/hand_landmark.tflite), I can see the output named 'output_handedness', which does not exist in the model from your repo.
Is it because Google published several versions of this model and you are using an older one?
If so, do you know whether the improvements in the latest version are worth it, and do you plan to convert it?

Thanks !

Object Detection Model Based on Open Image Dataset For Coral Tpu?

Would you be able to convert any of the object detection models based on open image dataset below for Coral Edge Tpu?

  • faster_rcnn_inception_resnet_v2_atrous_oidv4
  • ssd_mobilenetv2_oidv4
  • ssd_resnet_101_fpn_oidv4

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

I tried to convert the models, but couldn't convert them.

from tensorflow.compat.v1.lite import TFLiteConverter
import tensorflow as tf
import numpy as np
import cv2
from glob import glob

def represent():
    for img_path in glob('oid/*'):
        x = cv2.imread(img_path)
        x = cv2.resize(x, (300, 300))
        x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
        x = x.astype(np.float32) / 255.0   # calibration samples must be float32
        x = np.expand_dims(x, axis=0)      # add the batch dimension: [1, 300, 300, 3]
        yield [x]

saved_model_dir = 'models/ssd_mobilenet_v2_oid_v4_2018_12_12/saved_model'
converter = TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = represent
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

How to use U^2-net in Tensorflow?

I am trying to use the u2netp_480x640_float32.pb model in TensorFlow 2.3.1 (through the Java bindings, but that should not make much of a difference). However, I am not sure whether my use of the model is incorrect or something else is wrong.

What I do: load a 640x480 RGB image into a [1, 480, 640, 3] tensor and I scale the values between -1 and 1. I feed the tensor to the inputs layer and I fetch from the Identity layer, which is [1, 480, 640, 1]

What I get:
(screenshot: U2Net01-2020-10-29-19 14 19)

I also tried to scale values between 0-1, 0-255, -127-127. The latter gives me:
(screenshot: U2Net01-2020-10-29-19 09 30)

But that is not what I expect U^2-net to do either.

Am I missing something, or am I reading from the wrong layers?

Thanks
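
For reference, the upstream U^2-Net repository does not scale inputs to [-1, 1]: its data loader divides by the image max and then applies ImageNet per-channel mean/std, and the saliency output is usually min-max normalized before display. A minimal sketch assuming this converted .pb keeps that preprocessing (not verified against the conversion script):

import cv2
import numpy as np

def preprocess(path, width=640, height=480):
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB).astype(np.float32)
    img = cv2.resize(img, (width, height))
    img = img / np.max(img)
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std
    return img[np.newaxis, ...]  # [1, 480, 640, 3]

def postprocess(pred):
    pred = np.squeeze(pred)  # [480, 640]
    pred = (pred - pred.min()) / (pred.max() - pred.min() + 1e-8)
    return (pred * 255).astype(np.uint8)  # 8-bit saliency mask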

Unhandled "Dequantize" operation in new MediaPipe BlazeFace model

I tried to run your BlazeFace conversion script on the most recent BlazeFace model in MediaPipe. I noticed that the dequantize operation (builtin_code = 6) was not handled in your script. I tried to edit the script to handle this, but I am not at all familiar with TensorFlow, so I am not sure what this operation maps to. Would you mind providing some advice? Thank you in advance!

(I may have used the wrong FlatBuffer definitions, so the following operator_codes JSON might not look right.)

Model: https://github.com/google/mediapipe/blob/f15da632dec186f2c1d3c780f47086477e2286a9/mediapipe/models/face_detection_front.tflite

...
  "operator_codes": [
    {
      "deprecated_builtin_code": 3,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 19,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 4,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 0,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 34,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 17,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 22,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 2,
      "version": 1,
      "builtin_code": "ADD"
    },
    {
      "deprecated_builtin_code": 6,
      "version": 2,
      "builtin_code": "ADD"
    }
  ],
...

Updates for 33_Hand_Detection_and_Tracking

It seems Google has improved the hand landmark model (the announcement is here), so you might want to revisit it and update the models.

Also, I've noticed there are updates for BlazeFace too: there now seem to be two models, one for front-facing (selfie) cameras and another for back-facing (main) cameras.

Models seem to have been updated 2 months ago here.

BlazePose frozen model

Sorry to bother you again with this... I guess you're already working on converting the BlazePose TFLite models to a frozen model as well as a saved model?

I've successfully run the TFLite models, and even on desktop using only the CPU the models are very fast and responsive; it's definitely a much better solution than the PoseNet models.

My understanding is that PoseNet models are better suited for one shot detection, whereas BlazePose is better for continuous tracking, which makes it very good for realtime applications.

Trying to understand output of 25 head pose estimator

Hi again!

I am trying to use the 25 head pose estimator; I've been able to load and run the frozen_inference_graph.pb model, but I don't know how to decode the tensor output.

The model has an image input of 128x128 pixels and outputs an array of 136 float values.

The Python code seems to imply there are several output tensors (scores, boxes, etc.), but I only see this 136-float output tensor.

So, how do I decode it?

As a side note: I am looking for a fast way of detecting faces in an image, at distances of up to 5 metres, so BlazeFace is not suitable for me. I don't need landmarks or any other kind of information, just face rectangles on an image. Given that you have a collection of face detection models, which one do you think is best for that task?

Thanks in advance!
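
As a note on the 136-value output: 136 floats are usually 68 facial landmarks stored as (x, y) pairs, with head pose solved separately (e.g. via cv2.solvePnP against a 3D face model) rather than output by the graph. A minimal decoding sketch under that assumption (whether the values are normalized to [0, 1] or already in pixels needs to be checked):

import numpy as np

def decode_landmarks(output, crop_w=128, crop_h=128):
    pts = np.array(output, dtype=np.float32).reshape(68, 2)
    pts[:, 0] *= crop_w  # scale back to pixels if the values are in [0, 1]
    pts[:, 1] *= crop_h
    return pts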

Mobilenet V3 + SSD

Hey there,
First of all, thank you for sharing your work.

I'm trying to train MobilenetV3_SSD on my own dataset and then get the tflite.
However, I cannot use TF Object Detection API for quantization aware training.
Seems like there are some issues with their published checkpoints.
I found people having the same problem on GitHub (here and here).

However, it looks like you were able to fix it. May I ask what method or modification you used?

Thank you in advance.

Segmentation fault while training DeepLabV3 on Cityscapes

Hi
I am following your tutorial for training DeepLab; can you help me with this segmentation fault, please?
Here is the command I'm using for training:

python3 train.py \
    --logtostderr \
    --training_number_of_steps=5000 \
    --train_split="train_fine" \
    --model_variant="mobilenet_v3_small_seg" \
    --decoder_output_stride=16 \
    --train_crop_size="513,513" \
    --train_batch_size=2 \
    --dataset="cityscapes" \
    --save_interval_secs=300 \
    --save_summaries_secs=300 \
    --save_summaries_images=False \
    --log_steps=100 \
    --train_logdir=/home/rahbinsanat/amir/tensorflow-github/research/deeplab/datasets/cityscapes/exp/train_on_train_set/train \
    --dataset_dir=/home/rahbinsanat/amir/tensorflow-github/research/deeplab/datasets/cityscapes/tfrecord

Here you can see the error while starting queues:

I0905 11:47:48.102874 140652054267712 learning.py:768] Starting Queues.
Fatal Python error: Segmentation fault

Thread 0x00007fea357fa700 (most recent call first):
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443 in _call_tf_sessionrun
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350 in _run_fn
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365 in _do_call
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359 in _do_run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180 in _run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/training_util.py", line 68 in global_step
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 1149 in save
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/supervisor.py", line 1119 in run_loop
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/coordinator.py", line 495 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00007fea35ffb700 (most recent call first):
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443 in _call_tf_sessionrun
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350 in _run_fn
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365 in _do_call
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359 in _do_run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180 in _run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/training_util.py", line 68 in global_step
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/supervisor.py", line 1081 in run_loop
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/coordinator.py", line 495 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00007fea367fc700 (most recent call first):
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443 in _call_tf_sessionrun
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350 in _run_fn
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365 in _do_call
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359 in _do_run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180 in _run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/supervisor.py", line 1045 in run_loop
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/coordinator.py", line 495 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00007febde7fc700 (most recent call first):
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 296 in wait
  File "/home/rahbinsanat/anaconda3/lib/python3.7/queue.py", line 170 in get
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/summary/writer/event_file_writer.py", line 159 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/home/rahbinsanat/anaconda3/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00007fec1bbb4740 (most recent call first):
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443 in _call_tf_sessionrun
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350 in _run_fn
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365 in _do_call
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359 in _do_run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180 in _run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/contrib/slim/python/slim/learning.py", line 490 in train_step
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/contrib/slim/python/slim/learning.py", line 775 in train
  File "train.py", line 456 in main
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/rahbinsanat/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "train.py", line 462 in <module>
Segmentation fault (core dumped)

Model resulted in Nan value during calibration.

Code (just a demo of how I quantize; it can't reproduce the error on its own):

def representative_dataset_gen():
    for x in validation_fingerprints:
      x = x[np.newaxis,:]
      yield [x]

converter = tf.lite.TFLiteConverter.from_saved_model(flags.train_dir + '/last_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.allow_custom_ops = True
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
last_quant_model = converter.convert()
with open(flags.train_dir + '/quant_last_model.tflite', 'wb') as w:
  w.write(last_quant_model)

Some config
type(validation_fingerprints): <class 'numpy.ndarray'>
shape(validation_fingerprints): (3093, 16384)
type(x): <class 'numpy.ndarray'>
shape(x): (1,16384)
The model_summary
model_summary.txt

validation_fingerprints is np.float32. I don't know if that would cause a problem in full integer quantization. (I found that 4-2-6-7. Full Integer Quantization from saved_model (All 8-bit integer quantization) also uses np.float32, though.)
I've also found this issue, but setting fused=False in batch norm doesn't help. Is there any advice? I've been stuck on this for several days.

Yolov4-Tiny EdgeTPU Missing Leaky Relu

Hello,
How were you able to execute a YOLO model on the Coral with Leaky ReLU? We are using the TFLite interpreter with "libedgetpu.so.1.0" and we are getting errors.
self.interpreter = tf.lite.Interpreter( hw_model_path, experimental_delegates=[load_delegate('libedgetpu.so.1.0')] )
main.ERROR - Only float32 and uint8 is supported currently, got INT8.Node number 404 (LEAKY_RELU) failed to invoke.

Missing model

Google has released a Hair Segmentation model, but you don't seem to have this project at present.

yolov4-tiny with RaspberryPi

Hello,
It seems you have great performance with yolo tiny with an RPI :
RaspberryPi4 + CPU only + INT8 + Tensorflow Lite (4 threads) + 416x416 with 243ms/inference Performance.

I assume you used the following command:

bazel run -c opt tensorflow/lite/tools/benchmark:benchmark_model -- \
  --graph=${HOME}/Downloads/yolov4-tiny.tflite \
  --num_threads=4 \
  --warmup_runs=1 \
  --enable_op_profiling=true

However, how can I get similar results but with my own images and classes? Do you have a repo for performing object detection optimised for the RPi (aarch64/armhf)?

Also, I saw you have special wheels for TensorFlow on aarch64 (https://github.com/PINTO0309/Tensorflow-bin), but do you have the same for other packages like OpenCV on aarch64?

model's output tensor size is not 4

engine = DetectionEngine("yolov4_416_full_integer_quant_edgetpu.tflite")

ValueError: Dectection model should have 4 output tensors!This model has 3.

018_EfficientDet Model outputs?

Hi, I am trying to use your EfficientDet model, but I am clueless as to what the output is supposed to be. AFAIK the output is (1, 100, 7); I assume it's 100 detections, but 7 of what?
Here is what I am seeing from output:

output_data = self.model.get_tensor(self.output_details[0]['index'])
output_data.shape: (1, 100, 7)
[[[ 0.        0.        0.       20.931305 20.931305  0.       62.793915]
  [ 0.        0.        0.       20.931305 20.931305  0.       62.793915]
  [ 0.        0.        0.       20.931305 20.931305  0.       62.793915]
  ...
  (the same row is repeated for all 100 detections)
  [ 0.        0.        0.       20.931305 20.931305  0.       62.793915]]]

Can you explain what I am looking at?

What dataset was used in training the 026_mobile-deeplabv3-plus OpenVINO models?

I am assuming that these are the same as the ones provided by TensorFlow at tf/models/research/deeplab.

If so, the models should output segmentations for 21 classes. However, the provided optimized models predict seg_maps containing only zeros and ones.

Can you please explain the difference between your output and the original model on tf? Is this because you retrained the model on a person class only? If so, what was the dataset used in training?

Thank you.

Mediapipe facemesh tfjs

Any plans for optimization for mobile browsers? Or will the TFJS model provided in the repo work?

schema.fbs error: illegal character: <

I am trying to convert blazeface_front.tflite to .pb and am getting stuck here:

output json command = flatc -t --strict-json --defaults-json -o . schema.fbs -- face_detection_front.tflite
error: /home/duc/Desktop/FServing/flatbuffers/schema.fbs:6: 1: error: illegal character: <
Traceback (most recent call last):
  File "/home/duc/Desktop/FServing/flatbuffers/blazeface_tflite_to_pb.py", line 210, in <module>
    main()
  File "/home/duc/Desktop/FServing/flatbuffers/blazeface_tflite_to_pb.py", line 168, in main
    ops, op_types = parse_json()
  File "/home/duc/Desktop/FServing/flatbuffers/blazeface_tflite_to_pb.py", line 30, in parse_json
    j = json.load(open(model_json_path))
FileNotFoundError: [Errno 2] No such file or directory: 'face_detection_front.json'

How to use representative_dataset_gen()

I used TF 2.3 and got a segmentation fault (core dumped) error when using representative_dataset_gen().
Code

    def representative_dataset_gen():
      for audio in validation_fingerprints:
        yield [audio]
    converter = tf.lite.TFLiteConverter.from_saved_model(flags.train_dir + '/last_model')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.allow_custom_ops = True
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    # converter.representative_dataset = representative_dataset_gen
    quant_model = converter.convert()
    with open(flags.train_dir + '/quant_last_model.tflite', 'wb') as w:
      w.write(quant_model)

The above code runs without error, but if I enable converter.representative_dataset = representative_dataset_gen, it fails.

The type and shape of the data are below. The input layer size is [Batch, 16384].

type(validation_fingerprints): <class 'numpy.ndarray'>
shape(validation_fingerprints): (3093, 16384)

Any suggestion?
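
One common cause of calibration crashes is that each yielded sample must match the model's input signature in rank and dtype ([1, 16384] float32 here). A minimal sketch of the generator with an explicit batch dimension and cast, as an assumption about what the converter expects:

import numpy as np

def representative_dataset_gen():
    for audio in validation_fingerprints:
        sample = np.asarray(audio, dtype=np.float32)[np.newaxis, :]  # [1, 16384]
        yield [sample]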

BlazePose Output

Hi! Thank you for your great work!
I'm confused about the BlazePose output, which is a (156,) array of numbers like:
[ 2.52126999e+02, 6.02675476e+01, 0.00000000e+00, 7.85891479e+02, 1.81462738e+02, 1.63871140e+02, 0.00000000e+00, 7.20563904e+02, 1.45734039e+02, 1.29187439e+02, 0.00000000e+00, 6.98395630e+02, 2.13491409e+02, 1.06495354e+02, 0.00000000e+00, 6.22234802e+02, 2.00276703e+02, -8.05518913e+00, 0.00000000e+00, 8.24156921e+02, 2.26737579e+02, -6.08142509e+01, 0.00000000e+00, 8.02759888e+02, 3.44873230e+02, 1.73129330e+01, 0.00000000e+00, 7.70141296e+02, 1.80556412e+02, 2.63250000e+02, 0.00000000e+00, 5.80166565e+02, 3.85322113e+02, -1.70522858e+02, 0.00000000e+00, 7.14230835e+02, 6.90424728e+01, 1.44556747e+02, 0.00000000e+00, 6.49112549e+02, 1.49639694e+02, 1.06226685e+02, 0.00000000e+00, 7.19676697e+02, 5.28476562e+02, 3.08210297e+02, -1.69180872e+03, 5.81274231e+02, 1.55742020e+02, 3.07312897e+02, -1.85214661e+03, 5.41536011e+02, 9.28919907e+01, 8.45592224e+02, 1.27794141e+03, 2.13438950e+02, 1.48164871e+02, 5.72311401e+02, 1.61038025e+03, 3.95598450e+02, 3.03308502e+02, 3.77423492e+02, -7.68191895e+02, 2.65322083e+02, 4.57906586e+02, 2.28914776e+01, -1.81853455e+03, 4.70178528e+02, 6.31962402e+02, 2.88600311e+02, 0.00000000e+00, 2.67264465e+02, 5.21761108e+02, -1.34646225e+02, 0.00000000e+00, 4.36134796e+02, 5.85655136e+01, -6.74091949e+01, 0.00000000e+00, 2.82592468e+02, 6.13506226e+02, -1.69087173e+02, 0.00000000e+00, 5.03378204e+02, -7.62357254e+01, -9.83965836e+01, 0.00000000e+00, 3.04275391e+02, 6.24550049e+02, -3.54933510e+01, 0.00000000e+00, 4.93076843e+02, 5.50927551e+02, 3.69620392e+02, 7.18008652e+01, 4.75950956e+00, 2.44607971e+02, 4.69858917e+02, -8.90234451e+01, -1.11216354e+02, 2.67619415e+02, 5.19302246e+02, 2.35148270e+02, 2.89389679e+02, 3.65628471e+01, 3.67190094e+02, 4.64473145e+02, 8.65795364e+01, -1.73825119e+02, 6.05172424e+02, 1.30989319e+03, 2.43793793e+02, 1.52740601e+02, 8.13551758e+02, 1.99037500e+03, 1.52300385e+02, -1.58977951e+02, 6.85409485e+02, 0.00000000e+00, 2.29356506e+02, 5.41917419e+02, 8.15468506e+02, 0.00000000e+00, 1.49431351e+02, 1.45869461e+02, 8.23752747e+02, 0.00000000e+00, 2.91660767e+02, 3.70319244e+02, 1.30325195e+03, 0.00000000e+00, 2.07150436e+02, 4.17732330e+02, 4.39149231e+02, 1.05031455e+00, 1.83442783e+01, 8.89772263e+01, -2.31506516e+02, -7.93924236e+00, 2.01322327e+01, 3.84161530e+02, -4.48814926e+01, 9.11639392e-01, 2.05853485e+02, 2.75295074e+02, 2.07653046e+02, -2.28759232e+01, 2.15975464e+02, 5.05869934e+02, -4.85139046e+01, -1.24414492e+01, 5.11680725e+02, 3.77195312e+02, 9.83366470e+01, -2.68878841e+01, 5.35371887e+02]
I understand that in some way they describe each of the 39 keypoints, but they are not normalized and I don't understand their meaning. Could you please help with this? How do I convert them into usual coordinates? Or what do they mean? Thanks!
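
For reference, 156 = 39 keypoints x 4 values; a common layout is (x, y, z, visibility) with x and y in pixels of the 256x256 input crop. A minimal decoding sketch under that assumption (the ordering should be checked against the MediaPipe graph, and the example image size is a placeholder):

import numpy as np

def decode_blazepose(raw, input_size=256, img_w=1920, img_h=1080):
    kps = np.array(raw, dtype=np.float32).reshape(39, 4)
    xy = kps[:, :2] / input_size  # normalize to [0, 1] within the crop
    xy[:, 0] *= img_w             # map back to the original image
    xy[:, 1] *= img_h
    return xy, kps[:, 2], kps[:, 3]  # coordinates, z, visibility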

Is there an easy way to convert ONNX or PB from (NCHW) to (NHWC)?

@PINTO0309 Hi,
Nice work with YOLOv4 / tiny!

As I see you use:

  • NCHW for: OpenVINO (xml / bin), Darknet (cfg / weights)

  • NHWC for: TFLite, Keras (yolov4_tiny_voc.json / yolov4_tiny_voc.h5), TF1 (pb), TF2 (saved_models.json / saved_models.pb)

I have several questions:

  • Is there an easy way to convert ONNX or PB from (NCHW) to (NHWC)?
    I've seen converters that add transpose before and after each layer, but this seems to slow things down a lot. Is it possible to do this transformation without slowing down the inference?

  • Is there an easy way to convert TF1-pb to TF2-saved_models.pb ?

  • Is NHWC slowing down execution on the GPU?

  • How many FPS do you get on Google Coral TPU-Edge and RaspberryPi4 for yolov4-tiny (int8)?

  • What script did you use to get yolov4_tiny_voc.json ?

ImportError: No Module named tensorflow.lite.python.interpreter

I've tried to replicate one of the examples (specifically example 1) with a different OS on a Raspberry Pi 4B.

Knowing that this is using Ubuntu instead of Raspbian Buster, am I to presume this is an OS problem? Or does the problem lie in the TensorFlow / TensorFlow Lite version I am using?

Many thanks in advance.

To the owner of the repo, you are amazing for creating this work

Here are the details:

$ sudo python mobilenetv2ssdlite_movie_sync.py
Traceback (most recent call last):
  File "mobilenetv2ssdlite_movie_sync.py", line 10, in <module>
    from tensorflow.lite.python.interpreter import Interpreter
ImportError: No module named tensorflow.lite.python.interpreter

--------------------------- PYTHON IMPORT SUCCEEDS INTERACTIVELY BUT NOT IN THE SCRIPT ---------------------------

(2.2.0_Tensor) ubuntu@ubuntu:~/Repositories/PINTO_model_zoo/006_mobilenetv2-ssdlite/02_voc$ python
Python 3.7.5 (default, Apr 19 2020, 20:18:17)
[GCC 9.2.1 20191008] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> from tensorflow.lite.python.interpreter import Interpreter
>>> from tflite_runtime.interpreter import Interpreter

--------------------------- SYSTEM ENVIRONMENT INFO --------------------------- 
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="19.10 (Eoan Ermine)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.10"
VERSION_ID="19.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=eoan
UBUNTU_CODENAME=eoan


$ pip list
Package                Version
---------------------- ---------
absl-py                0.9.0
appdirs                1.4.4
astunparse             1.6.3
cachetools             4.1.1
certifi                2020.6.20
chardet                3.0.4
Cython                 0.29.21
distlib                0.3.1
easydict               1.9
filelock               3.0.12
gast                   0.3.3
google-auth            1.20.0
google-auth-oauthlib   0.4.1
google-pasta           0.2.0
grpcio                 1.30.0
h5py                   2.10.0
idna                   2.10
importlib-metadata     1.7.0
Keras-Preprocessing    1.1.2
Markdown               3.2.2
numpy                  1.19.1
oauthlib               3.1.0
opt-einsum             3.3.0
pbr                    5.4.5
Pillow                 7.2.0
pip                    20.2
protobuf               3.12.4
pyasn1                 0.4.8
pyasn1-modules         0.2.8
requests               2.24.0
requests-oauthlib      1.3.0
rsa                    4.6
scipy                  1.4.1
setuptools             49.2.0
six                    1.15.0
stevedore              3.2.0
tensorboard            2.2.2
tensorboard-plugin-wit 1.7.0
tensorflow             2.2.0
tensorflow-estimator   2.2.0
termcolor              1.1.0
tflite-runtime         2.2.0
urllib3                1.25.10
virtualenv             20.0.28
virtualenv-clone       0.5.4
virtualenvwrapper      4.8.4
Werkzeug               1.0.1
wheel                  0.34.2
wrapt                  1.12.1
zipp                   3.1.0

Mask-RCNN unresolved custom op

Hey, great project!

I tried to run the Mask-RCNN tflite model and I get the following error:

RuntimeError: Encountered unresolved custom op: NonMaxSuppressionV5.Node number 109 (NonMaxSuppressionV5) failed to prepare.

Do you have any plans to create this operation for tflite?
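
For context, NonMaxSuppressionV5 is a TensorFlow op rather than a TFLite builtin. One option is to keep it as a "select TF op" at conversion time and run the model with an interpreter that links the Flex delegate (the full tensorflow pip package does). A minimal sketch, assuming conversion from a SavedModel (path is a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # regular TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TF (Flex) ops like NonMaxSuppressionV5
]
tflite_model = converter.convert()
open('mask_rcnn_flex.tflite', 'wb').write(tflite_model)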

Converted palm detection model

Hi, thanks for sharing all these models with us. In the folder "033_Hand_Detection_and_Tracking" there are scripts to convert the palm_detection.tflite model to a float 32 .pb model but it is not possible to download it with the download script provided, since it just downloads the hand_landmark model. Could you please make the converted palm detection model available?

For https://github.com/PINTO0309/PINTO_model_zoo/tree/master/036_Objectron/08_coreml, when I use it in an iOS project I get an error

For the https://github.com/PINTO0309/PINTO_model_zoo/tree/master/036_Objectron/08_coreml/sneakers.ml model, an error came up when I tried to use it in an iOS project with this code https://github.com/makeml-app/Live-Object-Recognition-CoreML: "The model does not have a valid input feature of type image" UserInfo={NSLocalizedDescription=The model does not have a valid input feature of type image}

In this model, can you rename "input" in the inputs column to "image__0" and give me the updated model? It would be a great help. Thanks in advance.

mobilenetv3-ssd quantization

Can you provide the float32 version of the tflite file?
I found the tflite_graph.pb and tflite_graph.pbtxt files, but I failed to convert them into a float32 or int8 tflite.
Or could you give me some advice or code on how to do that?
Thanks!

Posenet Edge TPU

Hi,

I'm a bit of a beginner and I'm looking for the official PoseNet model (257) based on MobileNet for the EdgeTPU (Google Dev Board). Do you have such a model? I didn't find it, or maybe I missed it in your repositories. :)

Thanks for help

posenet versions and resnet.

Hi, first of all, thanks for this great repository of TensorFlow models! I am learning TensorFlow and it's very useful.

I am trying to use the models from Posenet, and the results I am getting don't look very good compared to what we can see in the online posenet demo.

Also, in the TensorFlow.js repository, they say they're using the new PoseNet 2.0, which is only available for TensorFlow.js, and it comes in two modes: MobileNet and ResNet.

My questions are:

Are the PoseNet models available in your repository based on the old PoseNet models, or on the new PoseNet 2.0 advertised by TensorFlow.js?

Would it be possible for you to include the new PoseNet ResNet models in your repository?

Thanks in advance!

Running inference using Edge TPU models

Hi, I tried to run inference using a few edge TPU models from your model zoo, and faced the following errors. I was wondering if you could share the code script you used to run inference on mobile devices. Otherwise, It would be appreciated if you could help me solve these errors. Thank you so much.

Running on an Odroid with a Coral USB accelerator, Ubuntu 18.01.
Trying to detect persons.

Error 1: `ssd_mobilenet_v2_mnasfpn_shared_box_predictor_320_coco_full_integer_quant_edgetpu.tflite`

Traceback (most recent call last):
  File "one_ch_pd_app.py", line 62, in <module>
    init_inference()
  File "/root/Detection/person_detection.py", line 115, in init_inference
    interpreter = common.make_interpreter(input_model)
  File "/root/Detection/common.py", line 28, in make_interpreter
    {'device': device[0]} if device else {})
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 204, in __init__
    model_path, self._custom_op_registerers))
ValueError: Found too many dimensions in the input array of operation 'reshape'.

Error 2: `yolov4_416_full_integer_quant_edgetpu.tflite`

Traceback (most recent call last):
  File "one_ch_pd_app.py", line 77, in <module>
    objs = run_inference(THRESHOLD, 10, frame) # threshold, top_k, frame
  File "/root/Detection/person_detection.py", line 126, in run_inference
    interpreter.invoke()
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 506, in invoke
    self._interpreter.Invoke()
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter_wrapper.py", line 113, in Invoke
    return _interpreter_wrapper.InterpreterWrapper_Invoke(self)
RuntimeError: Only float32 and uint8 is supported currently, got INT8.Node number 404 (LEAKY_RELU) failed to invoke.

Error 3: `ssd_mobilenet_v2_oid_v4_300x300_full_integer_quant_edgetpu.tflite`

I could run the inference, but the bounding box was not shown on the screen. I think it could be because the ID (class label) for person is different in the Open Images dataset, but I am not sure what the class label for person is here.

Error 4: `ssdlite_mobilenet_v2_coco_300_full_integer_quant_edgetpu.tflite`

Traceback (most recent call last):
  File "one_ch_pd_app.py", line 77, in <module>
    objs = run_inference(THRESHOLD, 10, frame) # threshold, top_k, frame
  File "/root/Detection/person_detection.py", line 127, in run_inference
    objs = get_output(score_threshold=threshold, top_k=top_k)
  File "/root/Detection/person_detection.py", line 83, in get_output
    scores = common.output_tensor(interpreter, 2)
  File "/root/Detection/common.py", line 48, in output_tensor
    output_details = interpreter.get_output_details()[i]
IndexError: list index out of range

repository structure

Hi again!

I've kept playing with all the amazing models you're collecting in the zoo repository, but lately, as the repository keeps growing, it's becoming more difficult to find models.

I would like to suggest reorganizing the repository so it's arranged by theme.

So far I would suggest these root directories:

  • Artistic
  • Face Detection
  • Hand Detection
  • Body Pose Detection
  • Object Detection
  • Depth from monocular images
  • Misc

That way it would be easier to find and compare different models...
