
Comments (4)

tucan9389 commented on July 19, 2024

Related issues:

Related links:

Sample code creating a concrete function

import tensorflow as tf

# Train a trivial Keras model to convert.
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x=[-1, 0, 1, 2, 3, 4], y=[-3, -1, 1, 3, 5, 7], epochs=50)

# Get the concrete function from the Keras model.
run_model = tf.function(lambda x: model(x))

# Trace the function with the model's input signature.
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
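
For context, the conversion step that follows is sketched below, assuming a TF 2.x release where the converter accepts a list of concrete functions (the convert_to_tflite.py shown in the next comment uses the older singular-form API from the 2.0 alpha). The output filename here is just a placeholder.

# Sketch of the conversion step; 'model.tflite' is a placeholder path.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)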

from tf2-mobile-2d-single-pose-estimation.

tucan9389 commented on July 19, 2024

Related issue:

Conversion source code (convert_to_tflite.py)

# Copyright 2019 Doyoung Gwak ([email protected])
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ======================
#-*- coding: utf-8 -*-

import os.path
from path_manager import PROJ_HOME
from hourglass_model import HourglassModelBuilder
import tensorflow as tf

def convert_model(model, model_file_path):
    print('converting...')

    # Output paths: <model dir>/tflite/<model name>.tflite
    file_name = os.path.splitext(os.path.basename(model_file_path))[0]
    tflite_model_path = os.path.join(os.path.dirname(model_file_path), "tflite")
    tflite_model_file_path = os.path.join(tflite_model_path, file_name + '.tflite')
    os.makedirs(tflite_model_path, exist_ok=True)

    # Get the concrete function from the Keras model.
    run_model = tf.function(lambda x: model(x))

    # Trace the function with the model's input signature.
    concrete_func = run_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

    # TF 2.0 alpha API; later TF 2.x releases renamed this to from_concrete_functions([...]).
    converter = tf.lite.TFLiteConverter.from_concrete_function(concrete_func)
    # converter.post_training_quantize = True
    tflite_model = converter.convert()
    with open(tflite_model_file_path, 'wb') as file:
        file.write(tflite_model)


output_path = os.path.join(PROJ_HOME, "outputs")
model_path = os.path.join(output_path, "models")
model_file_path = os.path.join(model_path, "hg_1e9_20190403204228.hdf5")

print(model_path)

if os.path.isfile(model_file_path):
    print(model_file_path)

    model_builder = HourglassModelBuilder()
    model_builder.build_model()

    model = model_builder.model
    model.load_weights(model_file_path)
    convert_model(model, model_file_path)

else:
    print('no model found')

Log

converting...
2019-04-05 12:58:32.224673: I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-04-05 12:58:32.224733: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-04-05 12:58:32.281175: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:666] Optimization results for grappler item: graph_to_optimize
2019-04-05 12:58:32.281188: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:668]   function_optimizer: Graph size after: 2498 nodes (0), 3128 edges (0), time = 3.652ms.
2019-04-05 12:58:32.281192: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:668]   function_optimizer: Graph size after: 2498 nodes (0), 3128 edges (0), time = 5.165ms.
Traceback (most recent call last):
  File "/Users/doyounggwak/Project/MoTLabs/rehapp/ML/tf2-mobile-pose-estimation/convert_to_tflite.py", line 86, in <module>
    convert_model(model, model_file_path)
  File "/Users/doyounggwak/Project/MoTLabs/rehapp/ML/tf2-mobile-pose-estimation/convert_to_tflite.py", line 57, in convert_model
    tflite_model = converter.convert()
  File "/Users/doyounggwak/anaconda3/envs/pefm-env-tf2-alpha0/lib/python3.5/site-packages/tensorflow/lite/python/lite.py", line 246, in convert
    self._func)
  File "/Users/doyounggwak/anaconda3/envs/pefm-env-tf2-alpha0/lib/python3.5/site-packages/tensorflow/python/framework/convert_to_constants.py", line 173, in convert_variables_to_constants_v2
    "data": tensor_data[input_name],
KeyError: 'model_1/batch_normalization_v2_105/FusedBatchNorm/ReadVariableOp/resource'

Process finished with exit code 1
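
As a possible workaround for the KeyError above (a sketch only, not the fix applied in this repo), the concrete-function path can be skipped entirely and the Keras model handed straight to the converter, assuming a TF 2.x build where tf.lite.TFLiteConverter.from_keras_model is available:

# Hypothetical alternative: convert directly from the Keras model, avoiding
# the variable-freezing step that raises the KeyError above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(tflite_model_file_path, 'wb') as f:
    f.write(tflite_model)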

from tf2-mobile-2d-single-pose-estimation.

tucan9389 commented on July 19, 2024

It seems ML Kit (bundling TensorFlowLite 1.12.0) does not support models converted with the current TensorFlow version

Version

  • iOS 12.2
  • Firebase:
Using Firebase (5.18.0)
Using FirebaseAnalytics (5.7.0)
Using FirebaseCore (5.3.1)
Using FirebaseInstanceID (3.7.0)
Using FirebaseMLCommon (0.14.0)
Using FirebaseMLModelInterpreter (0.14.0)
Using FirebaseMLVision (0.14.0)
Using GTMSessionFetcher (1.2.1)
Using GoogleAPIClientForREST (1.3.8)
Using GoogleAppMeasurement (5.7.0)
Using GoogleMobileVision (1.5.0)
Using GoogleToolboxForMac (2.2.0)
Using GoogleUtilities (5.3.7)
Using Protobuf (3.7.0)
Using TensorFlowLite (1.12.0)
Using nanopb (0.3.901)

Log on iOS app

2019-04-07 13:16:33.196621+0900 PoseEstimation-MLKit[12951:3114553] 5.18.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Op builtin_code out of range: 97. Are you using old TFLite binary with newer model?
2019-04-07 13:16:33.196743+0900 PoseEstimation-MLKit[12951:3114553] 5.18.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Registration failed.
Inference error: 
Failed to create a TFLite interpreter for the given model (/var/containers/Bundle/Application/F0C8E488-3AD8-4A38-96FA-AD0E9D4649DD/PoseEstimation-MLKit.app/hg_1e8_nightly_20190405133613.tflite).
2019-04-07 13:16:33.312451+0900 PoseEstimation-MLKit[12951:3114494] 5.18.0 - [Firebase/Analytics][I-ACS023027] Do not schedule an upload task. Task already exists. Will be executed in seconds: 0.270807147026062
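
The "Op builtin_code out of range: 97" message means the TFLite runtime bundled by ML Kit (TensorFlowLite 1.12.0) is older than the ops used in the converted model. A minimal desktop check, assuming the Python tf.lite.Interpreter is available, is to try loading the file in an environment pinned to the runtime version the app ships with:

import tensorflow as tf

# Hypothetical sanity check: if the runtime is too old for the model's ops,
# interpreter creation/allocation fails here instead of on the device.
interpreter = tf.lite.Interpreter(model_path="hg_1e8_nightly_20190405133613.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())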

from tf2-mobile-2d-single-pose-estimation.

tucan9389 commented on July 19, 2024

Error

2020-03-09 00:49:49.429871: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-03-09 00:49:49.440184: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fdf4d648800 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-03-09 00:49:49.440194: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-03-09 00:50:10.861542: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-03-09 00:50:10.861616: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-03-09 00:50:11.114383: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-03-09 00:50:11.114401: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   function_optimizer: Graph size after: 3826 nodes (3021), 7385 edges (6580), time = 161.328ms.
2020-03-09 00:50:11.114406: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   function_optimizer: function_optimizer did nothing. time = 6.708ms.
2020-03-09 00:50:18.457048: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-03-09 00:50:18.457136: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-03-09 00:50:25.754670: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-03-09 00:50:25.754690: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 2372 nodes (-802), 4960 edges (-1604), time = 4832.47705ms.
2020-03-09 00:50:25.754694: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 2372 nodes (0), 4960 edges (0), time = 1420.66504ms.
Traceback (most recent call last):
  File "/Users/doyounggwak/Project/ml-project/github/tf2-mobile-pose-estimation/convert_to_tflite.py", line 32, in <module>
    tflite_model = converter.convert()
  File "/Users/doyounggwak/anaconda3/envs/pose_tf2_env/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 464, in convert
    **converter_kwargs)
  File "/Users/doyounggwak/anaconda3/envs/pose_tf2_env/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 457, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/Users/doyounggwak/anaconda3/envs/pose_tf2_env/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 203, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 968, in _find_and_load
SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2020-03-09 00:50:29.008556: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr 
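
The ConverterError here is ultimately a numpy/TensorFlow binary mismatch: numpy fails to import inside the converter's subprocess. A quick diagnostic sketch (not a fix) is to confirm both packages import cleanly in the same environment before rerunning the conversion:

import numpy as np
import tensorflow as tf

# If either import already fails with 'numpy.core._multiarray_umath', the numpy
# build in this environment is incompatible with the installed TensorFlow wheel.
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)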

from tf2-mobile-2d-single-pose-estimation.
