gmalivenko / onnx2keras

Convert ONNX model graph to Keras model format.

License: MIT License

Python 100.00%
onnx onnx2keras keras deep-learning deep-convolutional-networks tensorflow tensorflow-models

onnx2keras's Introduction

onnx2keras

ONNX to Keras deep neural network converter.


Requirements

TensorFlow 2.0
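
A typical installation, assuming the package is consumed from PyPI under the name onnx2keras (as the badges and the tracebacks below suggest):

pip install tensorflow onnx onnx2keras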

API

onnx_to_keras(onnx_model, input_names, input_shapes=None, name_policy=None, verbose=True, change_ordering=False) -> {Keras model}

onnx_model: ONNX model to convert

input_names: list with graph input names

input_shapes: override input shapes (experimental)

name_policy: ['renumerate', 'short', 'default'] override layer names (experimental)

verbose: detailed output

change_ordering: change tensor ordering from channels-first (NCHW) to channels-last (NHWC) (experimental)
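
For example, a call that exercises the optional arguments might look like this (a sketch; the shape and policies are illustrative):

k_model = onnx_to_keras(onnx_model, ['input'],
                        input_shapes=[(3, 224, 224)],  # override the declared input shape (experimental)
                        name_policy='renumerate',      # rename layers to LAYER_1, LAYER_2, ... (experimental)
                        change_ordering=True)          # convert to channels-last ordering (experimental)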

Getting started

ONNX model

import onnx
from onnx2keras import onnx_to_keras

# Load ONNX model
onnx_model = onnx.load('resnet18.onnx')

# Call the converter ('input' is the main model input name; it may differ for your model)
k_model = onnx_to_keras(onnx_model, ['input'])

The Keras model is stored in the k_model variable. Simple, isn't it?

PyTorch model

Using ONNX as an intermediate format, you can convert a PyTorch model as well.

import numpy as np
import torch
from torch.autograd import Variable
from pytorch2keras.converter import pytorch_to_keras
import torchvision.models as models

if __name__ == '__main__':
    input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
    input_var = Variable(torch.FloatTensor(input_np))
    model = models.resnet18()
    model.eval()
    k_model = \
        pytorch_to_keras(model, input_var, [(3, 224, 224,)], verbose=True, change_ordering=True)

    for i in range(3):
        input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
        input_var = Variable(torch.FloatTensor(input_np))
        output = model(input_var)
        pytorch_output = output.data.numpy()
        keras_output = k_model.predict(np.transpose(input_np, [0, 2, 3, 1]))
        error = np.max(np.abs(pytorch_output - keras_output))
        print('error -- ', error)  # Around zero :)

Deploying the model as a frozen graph

You can use the snippet below to convert your ONNX / PyTorch model to a frozen graph. This may be useful for deployment with TensorFlow.js, TensorFlow for Android, or the TensorFlow C API.

import numpy as np
import torch
from pytorch2keras.converter import pytorch_to_keras
from torch.autograd import Variable
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2


# Create and load the model (Model is a placeholder for your own torch.nn.Module)
model = Model()
model.load_state_dict(torch.load('model-checkpoint.pth'))
model.eval()

# Make dummy variables (and check that the model works)
input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
input_var = Variable(torch.FloatTensor(input_np))
output = model(input_var)

# Convert the model!
k_model = \
    pytorch_to_keras(model, input_var, (3, 224, 224), 
                     verbose=True, name_policy='short',
                     change_ordering=True)

# Save model to SavedModel format
tf.saved_model.save(k_model, "./models")

# Convert Keras model to ConcreteFunction
full_model = tf.function(lambda x: k_model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(k_model.inputs[0].shape, k_model.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()

print("-" * 50)
print("Frozen model layers: ")
for layer in [op.name for op in frozen_func.graph.get_operations()]:
    print(layer)

print("-" * 50)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)

# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
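
A hedged sketch of loading the frozen graph back for inference; the tensor names ("x:0", "Identity:0") are assumptions, so check them against the frozen model inputs/outputs printed above:

import numpy as np
import tensorflow as tf

def wrap_frozen_graph(pb_path, inputs, outputs):
    # Parse the serialized GraphDef and wrap it as a callable ConcreteFunction.
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())

    def _import():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped = tf.compat.v1.wrap_function(_import, [])
    return wrapped.prune(
        feeds=tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        fetches=tf.nest.map_structure(wrapped.graph.as_graph_element, outputs))

# Tensor names below are assumptions; adjust them to your graph.
frozen_model = wrap_frozen_graph("./frozen_models/frozen_graph.pb", "x:0", "Identity:0")
prediction = frozen_model(tf.constant(np.zeros((1, 224, 224, 3), dtype=np.float32)))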

License

This software is covered by MIT License.

onnx2keras's People

Contributors

andredance, gmalivenko, hzhexuan, jfeil, jiayiliu, kaimingkuang, mrharicot, nibeh, ninfueng, qureshizawar, sleepprogger, stekaiser, yuyakobayashi


onnx2keras's Issues

KeyError: 'Cast'

Can you add support for the Cast operator?
Or is there any way to add a custom layer?

Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.3\helpers\pydev\pydevd.py", line 1758, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.3\helpers\pydev\pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.3\helpers\pydev\pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "E:/WS_Nick/WS_Python/CRAFT/CRAFT-pytorch_nick/20190822_01pytorch2tflite.py", line 70, in <module>
    pytorch2savedmodel(onnx_model_path, keras_model_path,'input.1')
  File "E:\WS_Nick\WS_Python\CRAFT\CRAFT-pytorch_nick\converters.py", line 14, in pytorch2savedmodel
    change_ordering=True, verbose=False)
  File "C:\Users\ai\Desktop\OpenR8-19.11-3\Tools\Python\Python_3.6_GPU\Anaconda3\lib\site-packages\onnx2keras\converter.py", line 145, in onnx_to_keras
    AVAILABLE_CONVERTERS[node_type](
KeyError: 'Cast'

onnx2keras

import onnx
from onnx2keras import onnx_to_keras

# Load ONNX model

onnx_model = onnx.load('resnet18.onnx')

# Call the converter ('input' is the main model input name; it may differ for your model)

k_model = onnx_to_keras(onnx_model, ['input'])

In the above code, could anyone please specify what ['input'] stands for, or where we can get the 'input' parameter from our ONNX model?
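
(For reference, a minimal sketch of how to list the graph input names of a loaded ONNX model, using only the onnx package; the file name is illustrative:)

import onnx

onnx_model = onnx.load('resnet18.onnx')
# Each entry of graph.input is a ValueInfoProto; its .name is what onnx_to_keras expects.
print([inp.name for inp in onnx_model.graph.input])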

Thank you

KeyError: 'group'

I tried to convert the LaneNet-Lane-Detection model (https://github.com/MaybeShewill-CV/lanenet-lane-detection) from TensorFlow to ONNX and then to Keras.

But I received the following error:

  File ".../PycharmProjects/ONNX/lanenet/onnx2Keras.py", line 6, in <module>
    k_model = onnx_to_keras(onnx_model, ['lanenet/input_tensor'],name_policy='renumerate')
  File "...\anaconda3\lib\site-packages\onnx2keras\converter.py", line 174, in onnx_to_keras
    keras_names
  File "...\tools\anaconda3\lib\site-packages\onnx2keras\convolution_layers.py", line 212, in convert_convtranspose
    if params['group'] > 1:
KeyError: 'group'

Other details:
I printed the value of params

{'dilations': [1, 1], 'strides': [2, 2], 'kernel_shape': [4, 4], 'output_shape': [32, 64], 'change_ordering': False, 'name_policy': 'renumerate'}

In params I don't have 'group' or 'pads'.

You can access the ONNX model here: https://drive.google.com/file/d/1RvCjc05T5kRoy6VOBS4o9vZV6hQkOOjl/view?usp=sharing

I commented out lines 213-214 to skip the if statement if params['group'] > 1:, but the next error I received is:

Traceback (most recent call last):
  File ".../ONNX/lanenet/onnx2Keras.py", line 6, in <module>
    k_model = onnx_to_keras(onnx_model, ['lanenet/input_tensor'],name_policy='renumerate')
  File "...\anaconda3\lib\site-packages\onnx2keras\converter.py", line 174, in onnx_to_keras
    keras_names
  File "...\tools\anaconda3\lib\site-packages\onnx2keras\convolution_layers.py", line 237, in convert_convtranspose
    input_0.set_shape(input_0._keras_shape)
AttributeError: 'Tensor' object has no attribute '_keras_shape'
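
(For reference, a hedged workaround sketch for the missing attributes, treating an absent 'group' as 1 and absent 'pads' as zeros, which are the ONNX defaults; this is not the project's actual fix:)

# Inside convert_convtranspose, read optional ONNX attributes with defaults
# instead of assuming they are always present.
group = params.get('group', 1)            # ONNX default: 1
pads = params.get('pads', [0, 0, 0, 0])   # ONNX default: no padding

if group > 1:
    raise AttributeError('Grouped ConvTranspose is not supported.')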

Problems with importing model to keras: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

Hello, I'm trying to convert my PyTorch model to Keras and I already have an ONNX file for it. When I started converting ONNX to Keras, I got the following error:

DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 645).
DEBUG:onnx2keras:Check input 1 (name 646).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 648
DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 647).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 649
DEBUG:onnx2keras:node_params: {'to': 11, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 648).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Constant
DEBUG:onnx2keras:node_name: 650
DEBUG:onnx2keras:node_params: {'value': array(1.), 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Div
DEBUG:onnx2keras:node_name: 651
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 650).
DEBUG:onnx2keras:Check input 1 (name 649).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:div:Convert inputs to Keras/TF layers if needed.
WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Constant
DEBUG:onnx2keras:node_name: 652
DEBUG:onnx2keras:node_params: {'value': array(224.), 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Mul
DEBUG:onnx2keras:node_name: 653
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 651).
DEBUG:onnx2keras:Check input 1 (name 652).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 654
DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 653).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Mul
DEBUG:onnx2keras:node_name: 655
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 654).
DEBUG:onnx2keras:Check input 1 (name 654).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Unsqueeze
DEBUG:onnx2keras:node_name: 657
DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 639).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:unsqueeze:Work with numpy types.
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Unsqueeze
DEBUG:onnx2keras:node_name: 659
DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 655).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Concat
DEBUG:onnx2keras:node_name: 660
DEBUG:onnx2keras:node_params: {'axis': 0, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 657).
DEBUG:onnx2keras:Check input 1 (name 1057).
DEBUG:onnx2keras:The input not found in layers / model inputs.
DEBUG:onnx2keras:Found in weights, add as a numpy constant.
DEBUG:onnx2keras:Check input 2 (name 659).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:concat:Concat Keras layers.
WARNING:onnx2keras:concat:!!! IMPORTANT INFORMATION !!!
WARNING:onnx2keras:concat:Something goes wrong with concat layers. Will use TF fallback.
WARNING:onnx2keras:concat:---
Traceback (most recent call last):
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 110, in convert_concat
    name=keras_name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\layers\merge.py", line 705, in concatenate
    return Concatenate(axis=axis, **kwargs)(inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
    inputs = nest.map_structure(_convert_non_tensor, inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
    return ops.convert_to_tensor(x)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
    return constant_op.constant(value, dtype, name=name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/1/PycharmProjects/untitled/onnx_.py", line 6, in <module>
    k_model = onnx2keras.onnx_to_keras(model, ["input_data"])
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\converter.py", line 177, in onnx_to_keras
    keras_names
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 122, in convert_concat
    layers[node_name] = lambda_layer(layer_input)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
    inputs = nest.map_structure(_convert_non_tensor, inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
    return ops.convert_to_tensor(x)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
    return constant_op.constant(value, dtype, name=name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

Here is my code:

import onnx
import onnx2keras

model = onnx.load_model("model.onnx")

k_model = onnx2keras.onnx_to_keras(model, ["input_data"])

And here is my onnx file to reproduce my errors:
https://drive.google.com/file/d/1NjYMbidm5WOB4SeeBw22dGzn-jZIrU2-/view?usp=sharing

Python: 3.7.4
keras: 2.3.1
onnx: 1.6.0
onnx2keras: 0.0.18
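
(A hedged note: since the failure involves a None dimension reaching a Concat constant, one thing worth trying is the input_shapes override documented in the API section; the shape below is purely illustrative:)

k_model = onnx2keras.onnx_to_keras(model, ["input_data"],
                                   input_shapes=[(3, 224, 224)])  # illustrative shape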

Thank you.

AttributeError: Number of inputs is not equal 1 for slice layer

Hi @nerox8664!

I tried to convert an ONNX model to Keras, but I received the following error

Traceback (most recent call last):
  File "onnx2Keras.py", line 6, in <module>
    k_model = onnx_to_keras(onnx_model, ['input/input_data'],name_policy='renumerate')
  File "...\onnx2keras-env\lib\site-packages\onnx2keras\converter.py", line 174, in onnx_to_keras
    keras_names
  File "...\onnx2keras-env\lib\site-packages\onnx2keras\reshape_layers.py", line 264, in convert_slice
    raise AttributeError('Number of inputs is not equal 1 for slice layer')
AttributeError: Number of inputs is not equal 1 for slice layer

I converted this model https://github.com/YunYang1994/tensorflow-yolov3 from Tensorflow to ONNX and now I need the model in Keras.

You can access the ONNX model here: https://drive.google.com/file/d/1ad3EDCILJf1xq8z0J39ZC1LAMAzI9zxV/view?usp=sharing

Thanks!

onnx2keras

While executing k_model = onnx_to_keras(onnx_model, ['input']), we are getting the following error:


Traceback (most recent call last):
  File "import_convt2keras.py", line 8, in <module>
    k_model = onnx_to_keras(onnx_model, ['input'])
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/onnx2keras/converter.py", line 80, in onnx_to_keras
    weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

IndexError: list index (1) out of range

Hello Grigory,

Thank you for the library. I am using it to convert a PyTorch model to TF.js via ONNX.

Here is a link to the ONNX file, https://drive.google.com/open?id=1BJM5R1uQ4fW3RwWikvjdmop-NEwMo9RF, exported using this:
dummy_input = torch.randn(1,3, 224, 224, device='cuda')
input_names = [ "input" ]
output_names = [ "output" ]
torch.onnx.export(model, dummy_input, "model.onnx", verbose=True, input_names=input_names, output_names=output_names, export_params=True)

k_model = onnx_to_keras(onnx_model, ['input'])

File "/usr/local/lib/python3.6/dist-packages/onnx2keras/converter.py", line 174, in onnx_to_keras
keras_names
File "/usr/local/lib/python3.6/dist-packages/onnx2keras/operation_layers.py", line 233, in convert_cast
if is_numpy(layers[node.input[0]]) and is_numpy(layers[node.input[1]]):
IndexError: list index (1) out of range

Replace all lambda functions with custom keras layers

Using lambdas can lead to the model not being usable on other python versions / systems when saving and loading the result as a h5 file (see: keras-team/keras#9595 ).

I suggest replacing all current lambda layers with keras custom layers.
This would mean this project would have to also generate a python file with all custom layer implementations or just ship one with all custom layers (I'd prefer the first one).
It is possible to register custom layers with keras so shipping a function to register all used custom layers would make sense instead of having the user map all custom layers on keras.models.load_model time.

Otherwise users would have to ship the onnx file and run the conversion for each use which isn't really a nice way to go about it IMO.
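
(A minimal sketch of the idea, assuming the standalone keras package; the ReduceMean layer and CUSTOM_LAYERS mapping are illustrative names, not part of this project:)

import keras
import keras.backend as K

class ReduceMean(keras.layers.Layer):
    # Illustrative custom layer that could replace a Lambda wrapping a mean op.
    def __init__(self, axis, **kwargs):
        super(ReduceMean, self).__init__(**kwargs)
        self.axis = axis

    def call(self, inputs):
        return K.mean(inputs, axis=self.axis)

    def get_config(self):
        # get_config is what makes the layer serializable to / loadable from an .h5 file.
        config = super(ReduceMean, self).get_config()
        config.update({'axis': self.axis})
        return config

CUSTOM_LAYERS = {'ReduceMean': ReduceMean}

# On the loading side, the custom layers are passed explicitly:
# model = keras.models.load_model('converted.h5', custom_objects=CUSTOM_LAYERS)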

`pooling_layers.py` has bugs regarding padding

I found that the added ZeroPadding2D layer is not correct when the paddings are not symmetric, as two of the four elements of the ONNX pads attribute are ignored. Below is my tentative fix, and it works in my case.

diff --git a/Users/yimengzh/Downloads/onnx2keras-0.0.18/onnx2keras/pooling_layers.py b/Users/yimengzh/Downloads/onnx2keras-0.0.18/onnx2keras/pooling_layers_new.py
index 6bb5118..1f14a48 100644
--- a/Users/yimengzh/Downloads/onnx2keras-0.0.18/onnx2keras/pooling_layers.py
+++ b/Users/yimengzh/Downloads/onnx2keras-0.0.18/onnx2keras/pooling_layers_new.py
@@ -21,12 +21,12 @@ def convert_maxpool(node, params, layers, node_name, keras_name):
     stride_height, stride_width = params['strides']
 
     pads = params['pads'] if 'pads' in params else [0, 0, 0, 0]
-    padding_h, padding_w, _, _ = pads
+    padding_h_t, padding_w_l, padding_h_b, padding_w_r = pads
 
     pad = 'valid'
 
     if height % 2 == 1 and width % 2 == 1 and \
-            height // 2 == padding_h and width // 2 == padding_w and \
+            height // 2 == padding_h_t == padding_h_b and width // 2 == padding_w_l == padding_w_r and \
             stride_height == 1 and stride_width == 1:
         pad = 'same'
         logger.debug('Use `same` padding parameters.')
@@ -34,7 +34,7 @@ def convert_maxpool(node, params, layers, node_name, keras_name):
         logger.warning('Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.')
         padding_name = keras_name + '_pad'
         padding_layer = keras.layers.ZeroPadding2D(
-            padding=(padding_h, padding_w),
+            padding=((padding_h_t, padding_h_b), (padding_w_l, padding_w_r)),
             name=padding_name
         )
         layers[padding_name] = input_0 = padding_layer(input_0)
@@ -68,22 +68,22 @@ def convert_avgpool(node, params, layers, node_name, keras_name):
     stride_height, stride_width = params['strides']
 
     pads = params['pads'] if 'pads' in params else [0, 0, 0, 0]
-    padding_h, padding_w, _, _ = pads
+    padding_h_t, padding_w_l, padding_h_b, padding_w_r = pads
 
     pad = 'valid'
 
     if height % 2 == 1 and width % 2 == 1 and \
-            height // 2 == padding_h and width // 2 == padding_w and \
+            height // 2 == padding_h_t == padding_h_b and width // 2 == padding_w_l == padding_w_r and \
             stride_height == 1 and stride_width == 1:
-        if padding_h > 0 or padding_w > 0:
+        if any(x > 0 for x in pads):
             pad = 'same'
             logger.debug('Use `same` padding parameters.')
     else:
-        if padding_h > 0 or padding_w > 0:
+        if any(x > 0 for x in pads):
             logger.warning('Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.')
             padding_name = keras_name + '_pad'
             padding_layer = keras.layers.ZeroPadding2D(
-                padding=(padding_h, padding_w),
+                padding=((padding_h_t, padding_h_b), (padding_w_l, padding_w_r)),
                 name=padding_name
             )
             layers[padding_name] = input_0 = padding_layer(input_0)

Onnx to keras model conversion failed

import onnx
from onnx2keras import onnx_to_keras

# Load ONNX model
onnx_model = onnx.load('/home/pat-011/Desktop/deep_learning/POC1/output/tensorflow/pytorch_onnx_tf_onnx.onnx')

# Call the converter (input - is the main model input name, can be different for your model)
k_model = onnx_to_keras(onnx_model, ['input'])

Error -

Traceback (most recent call last):
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/keras/engine/base_layer.py", line 279, in assert_input_compatibility
    K.is_keras_tensor(x)
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 474, in is_keras_tensor
    str(type(x)) + '. '
ValueError: Unexpectedly found an instance of type <class 'NoneType'>. Expected a symbolic tensor instance.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tf-onnx-pytorch-keras.py", line 30, in <module>
    k_model = onnx_to_keras(onnx_model, ['input'])
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/onnx2keras/converter.py", line 146, in onnx_to_keras
    node_name
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/onnx2keras/operation_layers.py", line 184, in convert_split
    input_0 = ensure_tf_type(layers[node.input[0]])
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/onnx2keras/utils.py", line 42, in ensure_tf_type
    return lambda_layer(fake_input_layer)
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/pat-011/.virtualenvs/onnx/lib/python3.6/site-packages/keras/engine/base_layer.py", line 285, in assert_input_compatibility
    str(inputs) + '. All inputs to the layer '
ValueError: Layer lambda_1 was called with an input that isn't a symbolic tensor. Received type: <class 'NoneType'>. Full input: [None]. All inputs to the layer should be tensors.

TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

I am trying to convert an ONNX model to Keras, but when I call the conversion function I receive the following error message: "TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'"

You can see the ONNX Model here: https://ibb.co/sKnbxWY

import onnx2keras
from onnx2keras import onnx_to_keras
import keras
import onnx

onnx_model = onnx.load('onnxModel.onnx')
k_model = onnx_to_keras(onnx_model, ['input_1'])

keras.models.save_model(k_model,'kerasModel.h5',overwrite=True,include_optimizer=True)
File "C:/../onnx2Keras.py", line 7, in <module>
    k_model = onnx_to_keras(onnx_model, ['input_1'])
  File "..\site-packages\onnx2keras\converter.py", line 80, in onnx_to_keras
    weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

About Slice Layer

Hello,

I have a problem related to slice layer .

My model is like this
(I want to slice along the channel dimension and then concatenate the slices, i.e.,
I set the axis attribute to 1 in the Slice layer of the ONNX file)

[Screenshot 2019-12-04: model graph]

I added a Slice layer after the first conv layer,
but while converting this model to a TFLite model, it reports that the inputs of the Concat layer are mismatched.

I printed the shapes of the Slice layers, like this:

[Screenshot 2019-12-04: printed slice shapes]

For the upper part, it is normal (the input is sliced into two tensors with 4 and 12 channels),

but for the lower part, it slices the wrong dimension.

I use change_ordering=True with onnx2keras==0.0.17.

Do you have any suggestions? Thanks a lot :)

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 0 dimension(s)

Hello @nerox8664, below is a minimal working example for the error I got. I want to convert a UNet to Keras and I need to concat some outputs. The concatenation seems to be the problem here. I hope you can help me.

Edit: the interpolation seems to be the problem.

import numpy as np
import onnx
import torch
import torch.nn as nn
import torch.nn.functional as F
from onnx2keras import onnx_to_keras


class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        x1 = self.pool(x)
        x1 = F.interpolate(x1, size=x.size()[2:], mode='nearest')
        x = torch.cat([x1, x], dim=1)
        return x


model = Test().eval().cpu()

input_np = np.random.uniform(0, 1, (1, 3, 120, 160))
input_var = torch.FloatTensor(input_np)

torch.onnx.export(model, input_var, 'test.onnx', verbose=True, opset_version=11,
                  input_names=['input'], output_names=['output'])
onnx_model = onnx.load('test.onnx')
k_model = onnx_to_keras(onnx_model=onnx_model, input_names=['input'])

change_ordering in constant layer

Hi,
My version of onnx2keras is 0.0.9

When I convert my model, I encounter the following error:

File "/usr/local/lib/python3.6/dist-packages/onnx2keras/constant_layers.py", line 11, in convert_constant
if node_name['change_ordering']:
TypeError: string indices must be integers

I'm not sure whether it's OK to ignore 'change_ordering' here; I just want to check.

Moreover, I can convert my model successfully with 0.0.3.

Thanks!

GlobalAveragePooling is converted into "Lambda"

Hi,

Thank you for this very friendly and beautiful tool.

I was trying to convert a PyTorch model that runs global average pooling after the last conv layer and before the first fc layer (in PyTorch's forward method, it is implemented by the line x = x.mean([2, 3])). Note that this is the same as in PyTorch's official MobileNet-V2 implementation.

The problem is that after exporting the PyTorch model to ONNX, onnx2keras converts the averaging operation into a Lambda layer. The problem with the Lambda layer is that I cannot export it as a TensorFlow frozen graph.

The ONNX input file visualized in Netron (the operation is ReduceMean): [screenshot]

The Keras HDF5 model after importing it from ONNX using onnx2keras, visualized in Netron (the operation is a Lambda with something that looks like base64-encoded content): [screenshot]

  1. Is there a way to make onnx2keras convert this operation into Keras' GlobalAveragePooling2D layer?
  2. Is it planned to implement this conversion?
  3. Will onnx2keras work differently if I replace the PyTorch x = x.mean([2, 3]) command with a PyTorch AdaptiveAvgPool2d layer with a 1x1 output size? (See the sketch after this list.)
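
(For reference, a hedged sketch of the AdaptiveAvgPool2d variant mentioned in question 3; the module below illustrates the alternative forward pass only, and says nothing about how onnx2keras will convert it:)

import torch
import torch.nn as nn

class HeadWithGAP(nn.Module):
    # Illustrative head: global average pooling via AdaptiveAvgPool2d instead of x.mean([2, 3]).
    def __init__(self, in_channels=1280, num_classes=1000):
        super(HeadWithGAP, self).__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # 1x1 output size
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, x):
        x = self.pool(x)                             # (N, C, 1, 1)
        x = torch.flatten(x, 1)                      # (N, C), same values as x.mean([2, 3])
        return self.fc(x)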

Thanks!

InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU. Why this happen after onnx2keras?

Hi again,

I have done these steps:

onnx_model = onnx.load(FILE_PATH+"mnist_test.onnx")
k_model_onnx = onnx_to_keras(onnx_model, ['input_1'], name_policy="short")
k_model_onnx.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 28, 28, 1)]  0                                            
__________________________________________________________________________________________________
adjusted (Permute)              (None, 1, 28, 28)    0           input_1[0][0]                    
__________________________________________________________________________________________________
convolut (Conv2D)               (None, 32, 26, 26)   320         adjusted[0][0]                   
__________________________________________________________________________________________________
conv2d/I (Activation)           (None, 32, 26, 26)   0           convolut[0][0]                   
__________________________________________________________________________________________________
convolut_1 (Conv2D)             (None, 64, 24, 24)   18496       conv2d/I[0][0]                   
__________________________________________________________________________________________________
conv2d_1 (Activation)           (None, 64, 24, 24)   0           convolut_1[0][0]                 
__________________________________________________________________________________________________
conv2d_1_1_pad (ZeroPadding2D)  (None, 64, 24, 24)   0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_1_1 (MaxPooling2D)       (None, 64, 12, 12)   0           conv2d_1_1_pad[0][0]             
__________________________________________________________________________________________________
conv2d_1_2 (Permute)            (None, 12, 12, 64)   0           conv2d_1_1[0][0]                 
__________________________________________________________________________________________________
flatten/ (Reshape)              (None, None)         0           conv2d_1_2[0][0]                 
__________________________________________________________________________________________________
transfor_reshape (Reshape)      (None, 9216)         0           flatten/[0][0]                   
__________________________________________________________________________________________________
transfor (Dense)                (None, 128)          1179648     transfor_reshape[0][0]           
__________________________________________________________________________________________________
biased_t_const2 (Lambda)        (128,)               0           input_1[0][0]                    
__________________________________________________________________________________________________
biased_t (Lambda)               (None, 128)          0           transfor[0][0]                   
                                                                 biased_t_const2[0][0]            
__________________________________________________________________________________________________
dense/Id (Activation)           (None, 128)          0           biased_t[0][0]                   
__________________________________________________________________________________________________
transfor_1 (Dense)              (None, 10)           1280        dense/Id[0][0]                   
__________________________________________________________________________________________________
biased_t_1_const2 (Lambda)      (10,)                0           input_1[0][0]                    
__________________________________________________________________________________________________
biased_t_1 (Lambda)             (None, 10)           0           transfor_1[0][0]                 
                                                                 biased_t_1_const2[0][0]          
__________________________________________________________________________________________________
dense_1/ (Activation)           (None, 10)           0           biased_t_1[0][0]                 
==================================================================================================
Total params: 1,199,744
Trainable params: 1,199,744
Non-trainable params: 0
__________________________________________________________________________________________________
y_pred_onnx = k_model_onnx.predict(x_test)

Result :

Tensor("model/transfor/MatMul:0", shape=(None, 128), dtype=float32) Tensor("model/biased_t_const2/Const:0", shape=(128,), dtype=float32)
Tensor("model/transfor_1/MatMul:0", shape=(None, 10), dtype=float32) Tensor("model/biased_t_1_const2/Const:0", shape=(10,), dtype=float32)
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-16-c199c87d14d2> in <module>()
----> 1 y_pred_onnx = k_model_onnx.predict(x_test)

7 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError:  Default MaxPoolingOp only supports NHWC on device type CPU
	 [[node model/conv2d_1_1/MaxPool (defined at <ipython-input-16-c199c87d14d2>:1) ]] [Op:__inference_predict_function_2104]

Function call stack:
predict_function

I have no idea what this means or why it happened.
Thanks, and sorry!
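
(For reference, a hedged sketch of the change_ordering route documented in the README, which converts the layers to channels-last (NHWC) so the CPU MaxPooling op accepts them; the flag is experimental, so re-check the expected input layout of the converted model:)

k_model_onnx = onnx_to_keras(onnx_model, ['input_1'], name_policy="short",
                             change_ordering=True)  # experimental, per the README
k_model_onnx.summary()  # confirm the new input layout before calling predict
y_pred_onnx = k_model_onnx.predict(x_test)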

ONNX to Keras Conversion Failed - Unable to Use same Padding

import onnx
import torch
from onnx2keras import onnx_to_keras

# Load ONNX model
model = onnx.load('output/pytorch_onnx.onnx')


# Call the converter (input - is the main model input name, can be different for your model)
k_model = onnx_to_keras(model, ['input'])

Output -

W0827 20:22:03.981888 140701784454976 pooling_layers.py:33] Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.

W0827 20:22:03.985505 140701784454976 deprecation_wrapper.py:119] From /home/pat-011/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3661: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

W0827 20:22:04.031678 140701784454976 pooling_layers.py:33] Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.

Model definition -

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4*4*50, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2, 2)
        x = x.view(-1, 4*4*50)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)


model = Net().to(device)  # device is defined elsewhere in the script

conversion failure when shape contains None

Commit 322af56 introduced a bug in onnx2keras/reshape_layers.py if a shape contains None. The following change fixes it:

Before (line 51):

layers[node_name] = np.array([i.value for i in input_0.shape])

After:

aa = []
for i in input_0.shape:
    aa.append(i)
layers[node_name] = np.array(aa)
# layers[node_name] = np.array([i.value for i in input_0.shape])

Tensorflow to ONNX to Keras Model Conversion Failed

import onnx
import torch
from onnx2keras import onnx_to_keras

# Load ONNX model
model = onnx.load('ONNX/tensorflow-onnx/output/model.onnx')


# Call the converter (input - is the main model input name, can be different for your model)
k_model = onnx_to_keras(model, ['input'])

File "/home/pat-011/.local/lib/python3.6/site-packages/onnx2keras/converter.py", line 138, in onnx_to_keras
raise AttributeError('Current node is not in weights / model inputs / layers.')
AttributeError: Current node is not in weights / model inputs / layers.

The shape of intermediate layer doesn't sync (MNIST)

I downloaded MNIST from the ONNX model zoo, converted it to Keras, and printed the summary.

onnx_model = onnx.load('mnist/model.onnx')
#  k_model = onnx_to_keras(onnx_model, ['Input3'])
k_model = onnx_to_keras(onnx_model, ['Input3'], name_policy='renumerate')
print(k_model.summary())

The output shapes of the intermediate layers don't match what I see in Netron.

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
Input3 (InputLayer)             [(None, 1, 28, 28)]  0
__________________________________________________________________________________________________
LAYER_1 (Conv2D)                (None, 8, 24, 24)    200         Input3[0][0]
__________________________________________________________________________________________________
LAYER_2_const2 (Lambda)         (8, 1, 1)            0           Input3[0][0]
__________________________________________________________________________________________________
LAYER_2 (Lambda)                (None, 8, 24, 24)    0           LAYER_1[0][0]
                                                                 LAYER_2_const2[0][0]
__________________________________________________________________________________________________
LAYER_3 (Activation)            (None, 8, 24, 24)    0           LAYER_2[0][0]
__________________________________________________________________________________________________
LAYER_4_pad (ZeroPadding2D)     (None, 8, 24, 24)    0           LAYER_3[0][0]
__________________________________________________________________________________________________
LAYER_4 (MaxPooling2D)          (None, 8, 12, 12)    0           LAYER_4_pad[0][0]
__________________________________________________________________________________________________
LAYER_5 (Conv2D)                (None, 16, 8, 8)     3200        LAYER_4[0][0]
__________________________________________________________________________________________________
LAYER_6_const2 (Lambda)         (16, 1, 1)           0           Input3[0][0]
__________________________________________________________________________________________________
LAYER_6 (Lambda)                (None, 16, 8, 8)     0           LAYER_5[0][0]
                                                                 LAYER_6_const2[0][0]
__________________________________________________________________________________________________
LAYER_7 (Activation)            (None, 16, 8, 8)     0           LAYER_6[0][0]
__________________________________________________________________________________________________
LAYER_8_pad (ZeroPadding2D)     (None, 16, 8, 8)     0           LAYER_7[0][0]
__________________________________________________________________________________________________
LAYER_8 (MaxPooling2D)          (None, 16, 2, 2)     0           LAYER_8_pad[0][0]
__________________________________________________________________________________________________
LAYER_9 (Reshape)               (None, 256)          0           LAYER_8[0][0]
__________________________________________________________________________________________________
LAYER_10 (Dense)                (None, 10)           2560        LAYER_9[0][0]
__________________________________________________________________________________________________
LAYER_11_const2 (Lambda)        (1, 10)              0           Input3[0][0]
__________________________________________________________________________________________________
LAYER_11 (Lambda)               (None, 10)           0           LAYER_10[0][0]
                                                                 LAYER_11_const2[0][0]
==================================================================================================
Total params: 5,960
Trainable params: 5,960
Non-trainable params: 0

Does it currently support upsample_bilinear?

Code: z = F.upsample(y, size=y.size()[2:], mode='bilinear')

Traceback (most recent call last):
  File ".\test.py", line 43, in <module>
    k_model = converter.pytorch_to_keras(model, input_var, [(3, 224, 224)], verbose=True)
  File "D:\Python36\lib\site-packages\pytorch2keras\converter.py", line 73, in pytorch_to_keras
    verbose=verbose, change_ordering=change_ordering)
  File "D:\Python36\lib\site-packages\onnx2keras\converter.py", line 174, in onnx_to_keras
    keras_names
  File "D:\Python36\lib\site-packages\onnx2keras\operation_layers.py", line 234, in convert_cast
    if is_numpy(layers[node.input[0]]) and is_numpy(layers[node.input[1]]):
IndexError: list index (1) out of range

Edits needed to make it work for my model

I needed to make the following changes in order to make it work with TensorFlow 1.15 / Python 3.6:

  1. In /home/user/venv/tensorflow1.15/lib/python3.6/site-packages/onnx2keras/operation_layers.py:
import numpy as np

def convert_cast(node, params, layers, node_name, keras_name):
    """
    Convert Cast layer
    :param node: current operation node
    :param params: operation attributes
    :param layers: available keras layers
    :param node_name: internal converter name
    :param keras_name: resulting layer name
    :return: None
    """
    logger = logging.getLogger('onnx2keras:cast')

    if len(node.input) != 1:
        assert AttributeError('More than 1 input for cast layer.')

    if is_numpy(layers[node.input[0]]):

    ...
  2. In /home/user/venv/tensorflow1.15/lib/python3.6/site-packages/onnx2keras/reshape_layers.py:
def convert_slice(node, params, layers, node_name, keras_name):
    """
    Convert slice.
    :param node: current operation node
    :param params: operation attributes
    :param layers: available keras layers
    :param node_name: internal converter name
    :param keras_name: resulting layer name
    :return: None
    """
    logger = logging.getLogger('onnx2keras:slice')

    if len(node.input) != 1:
        raise AttributeError('Number of inputs is not equal 1 for slice layer')

    logger.debug('Convert inputs to Keras/TF layers if needed.')

    if isinstance(layers[node.input[0]], np.ndarray):
        for i in range(len(layers[node.input[0]])):
            layers[node.input[0]][i] = str(layers[node.input[0]][i])

    ...
  3. In /home/user/venv/tensorflow1.15/lib/python3.6/site-packages/onnx2keras/upsampling_layers.py:
import tensorflow as tf

def convert_upsample(node, params, layers, node_name, keras_name):
    """
    Convert upsample.
    :param node: current operation node
    :param params: operation attributes
    :param layers: available keras layers
    :param node_name: internal converter name
    :param keras_name: resulting layer name
    :return: None
    """
    logger = logging.getLogger('onnx2keras:upsample')
    logger.warning('!!! EXPERIMENTAL SUPPORT (upsample) !!!')

    if len(node.input) > 2:
        raise AttributeError('Unsupported number of inputs')

    if params['mode'].decode('utf-8') == 'linear':
        sess = tf.InteractiveSession()
        scales = layers[node.input[1]].eval()
        scale = (int(scales[2]), int(scales[3]))
        sess.close()

        upsampling = keras.layers.UpSampling2D(
            size=scale, name=keras_name, interpolation="bilinear"
        )

        layers[node_name] = upsampling(layers[node.input[0]])
    elif params['mode'].decode('utf-8') == 'nearest':
        scale = np.uint8(params['scales'][-2:])

        upsampling = keras.layers.UpSampling2D(
            size=scale, name=keras_name
        )

        layers[node_name] = upsampling(layers[node.input[0]])
    else:
        logger.error('Cannot convert non-linear/non-nearest upsampling.')
        raise AssertionError('Cannot convert non-linear/non-nearest upsampling')
  4. In /home/user/venv/tensorflow1.15/lib/python3.6/site-packages/onnx2keras/operation_layers.py:
def convert_cast(node, params, layers, node_name, keras_name):
    """
    Convert Cast layer
    :param node: current operation node
    :param params: operation attributes
    :param layers: available keras layers
    :param node_name: internal converter name
    :param keras_name: resulting layer name
    :return: None
    """
    logger = logging.getLogger('onnx2keras:cast')

    if len(node.input) != 1:
        assert AttributeError('More than 1 input for cast layer.')

    if is_numpy(layers[node.input[0]]):
        logger.debug('Cast numpy array')

        cast_map = {
            1: np.float32,
            2: np.uint8,
            3: np.int8,
            5: np.int16,
            6: np.int32,
            7: np.int64,
            9: np.bool,
            10: np.float16,
            11: np.double,
        }

        for i in range(len(layers[node.input[0]])):
            layers[node.input[0]][i] = str(layers[node.input[0]][i])
        layers[node_name] = cast_map[params['to']](layers[node.input[0]])
    else:
        input_0 = ensure_tf_type(layers[node.input[0]], name="%s_const" % keras_name)

        def target_layer(x, dtype=params['to']):
            import tensorflow as tf
            cast_map = {
                1: tf.float32,
                2: tf.uint8,
                3: tf.int8,
                5: tf.int16,
                6: tf.int32,
                7: tf.int64,
                9: tf.bool,
                10: tf.float16,
                11: tf.double,
            }
            if x.dtype=='string':
                return tf.strings.to_number(x, out_type=cast_map[dtype])
            return tf.cast(x, cast_map[dtype])

        lambda_layer = keras.layers.Lambda(target_layer, name=keras_name)
        layers[node_name] = lambda_layer(input_0)

Which ONNX version is this project targeting?

Since there are multiple breaking changes between different ONNX versions, I wonder which ONNX version this project is (or should be) targeting.

Slice, for example, switched from providing the parameters via params (node attributes) to providing them as inputs.
While this specific case could be handled in onnx2keras for both the old and new versions, there are other changes that would require more work.
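
(For reference, a minimal sketch of how to check the opset version of a loaded model with the onnx package, which is what determines the Slice behaviour described above:)

import onnx

model = onnx.load('model.onnx')
# opset_import is a list of OperatorSetIdProto; the default-domain entry carries the opset version.
for opset in model.opset_import:
    print(opset.domain or 'ai.onnx', opset.version)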

Missing converter for LRN

After trying to convert the AlexNet model from onnx (found here), I got the following error:

KeyError                                  Traceback (most recent call last)
 in 
      1 onnx_model = onnx.load('bvlcalexnet-9.onnx')
      2 print("onnx model loaded")
----> 3 model = onnx_to_keras(onnx_model, ['data_0'])

~/.local/lib/python3.8/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    172             logger.debug('... found all, continue')
    173 
--> 174         AVAILABLE_CONVERTERS[node_type](
    175             node,
    176             node_params,

KeyError: 'LRN'

Any ideas on how to get this to work?

load_model after torch

Hi

I am using load_model for a torch model that was converted to Keras through ONNX:
keras.models.load_model(my_keras_model)
It doesn't let me do it unless I use tf.keras.
I am getting this error:
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'module')
Why is that?

Keranify all the things.

It would be nice if the created model were Keras backend agnostic.
This can be done by replacing all currently used TensorFlow code with keras.backend code.

I already ported some (easy) code and was able to convert s3fd and run it with the TensorFlow and PlaidML backends. PlaidML supports AMD cards, which is the real reason for this.

I am going to do this work anyway, but it would be nice to have it merged into this project if possible.
If this is something you'd be interested in here is what would need to be done (and i would happily send PRs for this):

  • Replace all TensorFlow backend code in Lambda layers with keras.backend (K) function alternatives (see the sketch after this list).
  • Change ensure_tf_type to return K.constant values.
  • Set an explicit output_shape for all Lambda layers, since the shape guessing done by PlaidML and Theano might be wrong and, in the best case, leads to warnings.
  • Ensure layer names do NOT start with a digit, as this trips up PlaidML (at least at the moment). It might be best to only do this when requested via a parameter.
    This might be solved once the name_policy functionality is implemented?
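
A minimal sketch of what the first item could look like for a simple cast, assuming the Lambda body only wraps a single backend call (make_cast_layer is a hypothetical helper):

from keras import backend as K
from keras.layers import Lambda

def make_cast_layer(dtype, name):
    # Backend-agnostic cast: K.cast instead of tf.cast, with an explicit
    # output_shape so PlaidML / Theano do not have to guess it.
    return Lambda(lambda x: K.cast(x, dtype),
                  output_shape=lambda input_shape: input_shape,
                  name=name)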

KeyError: 'ConstantOfShape'

Hi, I was working on converting ONNX to Keras when I got this error message:

...
DEBUG:onnx2keras:Check input 0 (name 73).
DEBUG:onnx2keras:... found all, continue
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-9-c9733b56662e> in <module>()
      2 
      3 # Call the converter (input - is the main model input name, can be different for your model)
----> 4 k_model = onnx_to_keras(onnx_model, input_name_list)

/usr/local/lib/python3.6/dist-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    170             logger.debug('... found all, continue')
    171 
--> 172         AVAILABLE_CONVERTERS[node_type](
    173             node,
    174             node_params,

KeyError: 'ConstantOfShape'

There are no other indications; any ideas on how to solve this problem?

onnx version 1.6
onnx2keras version 0.0.18

change_ordering flag not working on a VGG19 pre-trained model

I am trying to convert a pre-trained VGG19 model from ONNX to Keras and then run it on my CPU (run = just predict, not train).

I managed to convert it with onnx2keras but then ran into issues with NCHW channels_first (3, 224, 224) vs. NHWC channels_last (224, 224, 3). Here is the error I get when running the converted model with Keras (TensorFlow backend):

tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU

I went back to onnx2keras and realized there is an experimental flag change_ordering that seems to do what I needed!

Unfortunately, change_ordering does not seem to work with my model:

ValueError: Operands could not be broadcast together with shapes (224, 224, 3) (3, 224, 224)

Here is the full stack trace:

ValueError                                Traceback (most recent call last)
<ipython-input-1-3a0e0ae82f31> in <module>()
      8 
      9 
---> 10 keras_m = onnx_to_keras(onnx_m, ['input'], verbose=True, change_ordering=True)

~/Documents/dev/venv/lib/python3.7/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    212
    213 
--> 214         model_tf_ordering = keras.models.Model.from_config(conf)
    215 
    216         for dst_layer, src_layer in zip(model_tf_ordering.layers,

~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/network.py in from_config(cls, config, custom_objects)
   1030                 if layer in unprocessed_nodes:
   1031                     for node_data in unprocessed_nodes.pop(layer):
-> 1032                         process_node(layer, node_data)
   1033 
   1034         name = config.get('name')

~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/network.py in process_node(layer, node_data)
    989             # and building the layer if needed.
    990             if input_tensors:
--> 991                 layer(unpack_singleton(input_tensors), **kwargs)
    992 
    993         def process_layer(layer_data):

~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
    429                                          'You can build it manually via: '
    430                                          '`layer.build(batch_input_shape)`')
--> 431                 self.build(unpack_singleton(input_shapes))
    432                 self.built = True
    433 

~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in build(self, input_shape)
    254 
    255     def build(self, input_shape):
--> 256         super(Subtract, self).build(input_shape)
    257         if len(input_shape) != 2:
    258             raise ValueError('A `Subtract` layer should be called '

~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in build(self, input_shape)
     89                 shape = input_shape[i][1:]
     90             output_shape = self._compute_elemwise_op_output_shape(output_shape,
---> 91                                                                   shape)
     92         # If the inputs have different ranks, we have to reshape them
     93         # to make them broadcastable.

~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in _compute_elemwise_op_output_shape(self, shape1, shape2)
     59                     raise ValueError('Operands could not be broadcast '
     60                                      'together with shapes ' +
---> 61                                      str(shape1) + ' ' + str(shape2))
     62                 output_shape.append(i)
     63         return tuple(output_shape)

ValueError: Operands could not be broadcast together with shapes (224, 224, 3) (3, 224, 224)

I am using Python 3.7 with (all from pip): onnx==1.5.0, onnx2keras==0.0.4, Keras==2.2.4, and tensorflow==1.14.0.

Here is the code that gets me the stack trace above (in a Jupyter notebook):

import onnx
from onnx2keras import onnx_to_keras
onnx_m = onnx.load('VGG19_PRETRAINED.onnx')
keras_m = onnx_to_keras(onnx_m, ['input'], verbose=True, change_ordering=True)

@nerox8664, thanks a lot for this (very useful!) library. I do appreciate any hints you might have about my issue.
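
For reference, once a conversion does go through, a quick way to validate it is to compare the Keras output against onnxruntime on the same input (a rough sketch, assuming onnxruntime is installed and 'input' is the graph input name):

import numpy as np
import onnx
import onnxruntime
from onnx2keras import onnx_to_keras

onnx_m = onnx.load('VGG19_PRETRAINED.onnx')
k_model = onnx_to_keras(onnx_m, ['input'], verbose=True, change_ordering=True)

x = np.random.uniform(0, 1, (1, 3, 224, 224)).astype(np.float32)

sess = onnxruntime.InferenceSession('VGG19_PRETRAINED.onnx')
onnx_out = sess.run(None, {'input': x})[0]

# change_ordering=True produces an NHWC model, so transpose the input first
keras_out = k_model.predict(np.transpose(x, [0, 2, 3, 1]))
print('max abs diff:', np.max(np.abs(onnx_out - keras_out)))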

global_avgpool2d has a bug?

When I use global_avgpool2d with change_ordering=True, the following error happens:
ValueError: operands could not be broadcast together with shapes (1,224,1,1) (1,3,1,1)

error converting tiny-yolov3 onnx model

I am trying to convert the tiny YOLOv3 ONNX model to Keras. I get a 'list index out of range' error.
Code:

import onnx 
from onnx2keras import onnx_to_keras

onnx_model = onnx.load("yolov3-tiny.onnx")  # load onnx model 
k_model = onnx_to_keras(onnx_model, ['input_1', 'image_shape'])

Output:

INFO:onnx2keras:Converter is called.
DEBUG:onnx2keras:List input shapes:
DEBUG:onnx2keras:None
DEBUG:onnx2keras:List inputs:
DEBUG:onnx2keras:Input 0 -> input_1.
DEBUG:onnx2keras:Input 1 -> image_shape.
DEBUG:onnx2keras:List outputs:
DEBUG:onnx2keras:Output 0 -> yolonms_layer_1.
DEBUG:onnx2keras:Output 1 -> yolonms_layer_1:1.
DEBUG:onnx2keras:Output 2 -> yolonms_layer_1:2.
DEBUG:onnx2keras:Gathering weights to dictionary.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~/Documents/MachineLearning/tmp_env/lib/python3.6/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
     84                 onnx_extracted_weights_name = onnx_w.ListFields()[2][1]
---> 85             weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
     86         except:

TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

During handling of the above exception, another exception occurred:

IndexError                                Traceback (most recent call last)
<ipython-input-4-588e2e05449f> in <module>
----> 1 k_model = onnx_to_keras(onnx_model, ['input_1', 'image_shape'])

~/Documents/MachineLearning/tmp_env/lib/python3.6/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
     85             weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
     86         except:
---> 87             onnx_extracted_weights_name = onnx_w.ListFields()[3][1]
     88             weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
     89 

IndexError: list index out of range

Here is the onnx model: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny_yolov3

Python version: Python 3.6.8
ONNX version: 1.7.0 (installed from source)

input type: float32[?,w,h,c]

After I converted a PyTorch ONNX model into a Keras .h5 file,
I noticed that the input type is float32[?,w,h,c].

Is that correct?
What should I do?

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

Hello, I'm trying to convert my PyTorch model to Keras and I already have an ONNX file for it. But when I started converting ONNX to Keras, I got the following error:

DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 645).
DEBUG:onnx2keras:Check input 1 (name 646).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 648
DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 647).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 649
DEBUG:onnx2keras:node_params: {'to': 11, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 648).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Constant
DEBUG:onnx2keras:node_name: 650
DEBUG:onnx2keras:node_params: {'value': array(1.), 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Div
DEBUG:onnx2keras:node_name: 651
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 650).
DEBUG:onnx2keras:Check input 1 (name 649).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:div:Convert inputs to Keras/TF layers if needed.
WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Constant
DEBUG:onnx2keras:node_name: 652
DEBUG:onnx2keras:node_params: {'value': array(224.), 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Mul
DEBUG:onnx2keras:node_name: 653
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 651).
DEBUG:onnx2keras:Check input 1 (name 652).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Cast
DEBUG:onnx2keras:node_name: 654
DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 653).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Mul
DEBUG:onnx2keras:node_name: 655
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 654).
DEBUG:onnx2keras:Check input 1 (name 654).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Unsqueeze
DEBUG:onnx2keras:node_name: 657
DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 639).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:unsqueeze:Work with numpy types.
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Unsqueeze
DEBUG:onnx2keras:node_name: 659
DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 655).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Concat
DEBUG:onnx2keras:node_name: 660
DEBUG:onnx2keras:node_params: {'axis': 0, 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name 657).
DEBUG:onnx2keras:Check input 1 (name 1057).
DEBUG:onnx2keras:The input not found in layers / model inputs.
DEBUG:onnx2keras:Found in weights, add as a numpy constant.
DEBUG:onnx2keras:Check input 2 (name 659).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:concat:Concat Keras layers.
WARNING:onnx2keras:concat:!!! IMPORTANT INFORMATION !!!
WARNING:onnx2keras:concat:Something goes wrong with concat layers. Will use TF fallback.
WARNING:onnx2keras:concat:---
Traceback (most recent call last):
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 110, in convert_concat
    name=keras_name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\layers\merge.py", line 705, in concatenate
    return Concatenate(axis=axis, **kwargs)(inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
    inputs = nest.map_structure(_convert_non_tensor, inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
    return ops.convert_to_tensor(x)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
    return constant_op.constant(value, dtype, name=name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/1/PycharmProjects/untitled/onnx_.py", line 6, in <module>
    k_model = onnx2keras.onnx_to_keras(model, ["input_data"])
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\converter.py", line 177, in onnx_to_keras
    keras_names
  File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 122, in convert_concat
    layers[node_name] = lambda_layer(layer_input)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
    inputs = nest.map_structure(_convert_non_tensor, inputs)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
    return ops.convert_to_tensor(x)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
    return constant_op.constant(value, dtype, name=name)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

Here is my code:

import onnx
import onnx2keras

model = onnx.load_model("model.onnx")

k_model = onnx2keras.onnx_to_keras(model, ["input_data"])

Thank you.
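
One thing that sometimes helps when a dynamic dimension ends up as None and then breaks a concat of shape tensors is passing explicit shapes via the experimental input_shapes argument (a sketch; the shape below is a placeholder for the model's real input shape):

import onnx
import onnx2keras

model = onnx.load_model("model.onnx")
k_model = onnx2keras.onnx_to_keras(model, ["input_data"],
                                   input_shapes=[(3, 224, 224)])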

[Regression] ValueError: The name "151" is used 4 times in the model. All layer names should be unique.

With the current version I am not able to export S3FD anymore.
Tested with your gist from the split issue in keras2onnx (https://gist.github.com/nerox8664/cbf70c39967b59d29f49afd9f9205a1b).
This happens regardless of whether a name_policy is set.

Traceback (most recent call last):
  File "foo.py", line 151, in <module>
    k_model = onnx_to_keras(onnx_model, ['input'])
  File "/home/nope/workspace/onnx2keras/onnx2keras/converter.py", line 205, in onnx_to_keras
    model = keras.models.Model(inputs=keras_inputs, outputs=keras_outputs)
  File "/home/nope/.local/lib/python3.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/nope/.local/lib/python3.7/site-packages/keras/engine/network.py", line 93, in __init__
    self._init_graph_network(*args, **kwargs)
  File "/home/nope/.local/lib/python3.7/site-packages/keras/engine/network.py", line 231, in _init_graph_network
    self.inputs, self.outputs)
  File "/home/nope/.local/lib/python3.7/site-packages/keras/engine/network.py", line 1455, in _map_graph_network
    ' times in the model. '
ValueError: The name "151" is used 4 times in the model. All layer names should be unique.

151 is one of the 4 outputs of the split layer, so it seems like this is related to layers with multiple outputs.

k_model = onnx_to_keras(onnx_model, ['input_1']) : 'conv2d/Identity:0' is not a valid scope name

Hi all,
I saved a model in another project this way:

os.environ['TF_KERAS'] = '1'
onnx_model = keras2onnx.convert_keras(model, model.name)
temp_model_file = 'mnist_test.onnx'
onnx.save_model(onnx_model, temp_model_file)
sess = onnxruntime.InferenceSession(temp_model_file)

In another project I tried to load the same model, without success:


onnx_model.Clear()
onnx_model = onnx.load(FILE_PATH+"mnist_test1.onnx")
k_model = onnx_to_keras(onnx_model, ['input_1'])

Result:

WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
INFO:onnx2keras:Converter is called.
DEBUG:onnx2keras:List input shapes:
DEBUG:onnx2keras:None
DEBUG:onnx2keras:List inputs:
DEBUG:onnx2keras:Input 0 -> input_1.
DEBUG:onnx2keras:List outputs:
DEBUG:onnx2keras:Output 0 -> dense_1.
DEBUG:onnx2keras:Gathering weights to dictionary.
DEBUG:onnx2keras:Found weight dense_1/kernel:0 with shape (128, 10).
DEBUG:onnx2keras:Found weight dense_1/bias:0 with shape (10,).
DEBUG:onnx2keras:Found weight dense/kernel:0 with shape (9216, 128).
DEBUG:onnx2keras:Found weight dense/bias:0 with shape (128,).
DEBUG:onnx2keras:Found weight conv2d_1/kernel:0 with shape (64, 32, 3, 3).
DEBUG:onnx2keras:Found weight conv2d_1/bias:0 with shape (64,).
DEBUG:onnx2keras:Found weight conv2d/kernel:0 with shape (32, 1, 3, 3).
DEBUG:onnx2keras:Found weight conv2d/bias:0 with shape (32,).
DEBUG:onnx2keras:Found input input_1 with shape [28, 28, 1]
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Transpose
DEBUG:onnx2keras:node_name: adjusted_input1
DEBUG:onnx2keras:node_params: {'perm': [0, 3, 1, 2], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name input_1).
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Conv
DEBUG:onnx2keras:node_name: convolution_output1
DEBUG:onnx2keras:node_params: {'auto_pad': b'VALID', 'dilations': [1, 1], 'group': 1, 'kernel_shape': [3, 3], 'strides': [1, 1], 'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name adjusted_input1).
DEBUG:onnx2keras:Check input 1 (name conv2d/kernel:0).
DEBUG:onnx2keras:The input not found in layers / model inputs.
DEBUG:onnx2keras:Found in weights, add as a numpy constant.
DEBUG:onnx2keras:Check input 2 (name conv2d/bias:0).
DEBUG:onnx2keras:The input not found in layers / model inputs.
DEBUG:onnx2keras:Found in weights, add as a numpy constant.
DEBUG:onnx2keras:... found all, continue
DEBUG:onnx2keras:conv:Conv with bias
DEBUG:onnx2keras:conv:2D convolution
DEBUG:onnx2keras:######
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Converting ONNX operation
DEBUG:onnx2keras:type: Relu
DEBUG:onnx2keras:node_name: conv2d/Identity:0
DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
DEBUG:onnx2keras:...
DEBUG:onnx2keras:Check if all inputs are available:
DEBUG:onnx2keras:Check input 0 (name convolution_output1).
DEBUG:onnx2keras:... found all, continue
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-113-d7ead83d92a5> in <module>()
      2 onnx_model.Clear()
      3 onnx_model = onnx.load(FILE_PATH+"mnist_test1.onnx")
----> 4 k_model = onnx_to_keras(onnx_model, ['input_1'])

5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in name_scope(self, name)
   4030         # op name regex, which constrains the initial character.
   4031         if not _VALID_OP_NAME_REGEX.match(name):
-> 4032           raise ValueError("'%s' is not a valid scope name" % name)
   4033     old_stack = self._name_stack
   4034     if not name:  # Both for name=None and name="" we re-set to empty scope.

ValueError: 'conv2d/Identity:0' is not a valid scope name
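
A possible workaround is to sanitize the tensor and node names in the ONNX graph before calling the converter, since TensorFlow rejects ':' and '/' in scope names. A rough sketch (sanitize_names is not part of onnx2keras; FILE_PATH is the same path variable as in the snippet above):

import onnx
from onnx2keras import onnx_to_keras

def sanitize_names(model):
    # Replace characters TensorFlow rejects in scope names in every tensor
    # and node name of the graph, keeping all references consistent.
    def fix(name):
        return name.replace(':', '_').replace('/', '_')

    graph = model.graph
    for node in graph.node:
        node.name = fix(node.name)
        for i, name in enumerate(node.input):
            node.input[i] = fix(name)
        for i, name in enumerate(node.output):
            node.output[i] = fix(name)
    for initializer in graph.initializer:
        initializer.name = fix(initializer.name)
    for value_info in list(graph.input) + list(graph.output) + list(graph.value_info):
        value_info.name = fix(value_info.name)
    return model

onnx_model = sanitize_names(onnx.load(FILE_PATH + "mnist_test1.onnx"))
k_model = onnx_to_keras(onnx_model, ['input_1'])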

TensorFlow won't install when using a Python 3.8 virtualenv

Bug report

What I did

  • Tried to install Tensorflow>=2 using pipenv.
  • Tried to install Tensorflow>=2 in a normal virtualenv

What happened

  • It threw an error

Workaround?

  • Downgraded pipenv to use Python 3.7; TensorFlow installs fine this way.

Error Trace

An error occurred while installing tensorflow==2.1.0 --hash=sha256:1cf129ccda0aea616b122f34b0c4bc39da959d34c4a4d8c23ed944555c5e47ab --hash=sha256:2e8fc9764b7ea87687a4c80c2fbde69aeeb459a536eb5a591938d7931ab004c2 --hash=sha256:33e4b16e8f8905ee088bf8f413dcce2820b777fdf7f799009b3a47f354ebb23f --hash=sha256:513d48dd751e0076d1b1e5e498e3522891305bedd2840f3cb4b1c57ffcb7d97d --hash=sha256:5cfa729fc71f6f2dca0ea77ebe768ea293e723e22ecb086a0b3ab26cc1776e37 --hash=sha256:7bad8ea686a1f33d9dac13eb578c4597346789d4f826980c8bbcfbd08e7dc921 --hash=sha256:8c0fae0f9f772ed7e3370f1b286f88c27debbcf09468e5036670ea2c67e239ec --hash=sha256:92c4f1c939de438fbe484d011e5eebe059fc8e5244cfe32a81c6891b3357d109 --hash=sha256:c420e70d4127c2ac00054aece54cf04a1a43d5d4f25de90267f247873f1bd5a8 --hash=sha256:e631f55cf30054fee3230c89a7f998fd08748aa3045651a5a760cec2c5b9f9d6 --hash=sha256:e877fbf373d5be42fb118269df1670b8d3c0df9be223904a2584a8f8ed23b082! Will try again.
Installing initially failed dependencies…
[pipenv.exceptions.InstallError]:   File "/home/bradley/.local/lib/python3.7/site-packages/pipenv/core.py", line 1874, in do_install
[pipenv.exceptions.InstallError]:       keep_outdated=keep_outdated
[pipenv.exceptions.InstallError]:   File "/home/bradley/.local/lib/python3.7/site-packages/pipenv/core.py", line 1253, in do_init
[pipenv.exceptions.InstallError]:       pypi_mirror=pypi_mirror,
[pipenv.exceptions.InstallError]:   File "/home/bradley/.local/lib/python3.7/site-packages/pipenv/core.py", line 859, in do_install_dependencies
[pipenv.exceptions.InstallError]:       retry_list, procs, failed_deps_queue, requirements_dir, **install_kwargs
[pipenv.exceptions.InstallError]:   File "/home/bradley/.local/lib/python3.7/site-packages/pipenv/core.py", line 763, in batch_install
[pipenv.exceptions.InstallError]:       _cleanup_procs(procs, not blocking, failed_deps_queue, retry=retry)
[pipenv.exceptions.InstallError]:   File "/home/bradley/.local/lib/python3.7/site-packages/pipenv/core.py", line 681, in _cleanup_procs
[pipenv.exceptions.InstallError]:       raise exceptions.InstallError(c.dep.name, extra=err_lines)
[pipenv.exceptions.InstallError]: ['Looking in indexes: https://pypi.python.org/simple']
[pipenv.exceptions.InstallError]: ['ERROR: Could not find a version that satisfies the requirement tensorflow==2.1.0 (from -r /tmp/pipenv-qifc1uhv-requirements/pipenv-wlgeesd2-requirement.txt (line 1)) (from versions: none)', 'ERROR: No matching distribution found for tensorflow==2.1.0 (from -r /tmp/pipenv-qifc1uhv-requirements/pipenv-wlgeesd2-requirement.txt (line 1))']
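
For reference, tensorflow==2.1.0 only ships wheels for Python 3.5-3.7, so pinning the interpreter is the practical fix until a TensorFlow release with Python 3.8 wheels is available (commands are a sketch):

pipenv --python 3.7
pipenv install "tensorflow==2.1.0" onnx2keras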

Multiple calls of onnx_to_keras with the change_ordering=True parameter.

Hi. I encountered the following problem when using onnx_to_keras to convert models with various shapes and with change_ordering set to True. It works well for the first model, but inside the onnx_to_keras function there is a line of code, keras.backend.set_image_data_format('channels_last'), that tells Keras the channel dimension will be last. The problem is that when we call onnx_to_keras a second time we expect the same behavior as on the first call, but it is not the same, because Keras still thinks the ordering is 'channels_last': the line keras.backend.set_image_data_format('channels_first') is applied only once, when the function or package is imported.

What do you think about that? Should the function behave the same on every call? I think it makes sense to fix this, because most often the conversion is used to save the converted model for later use, and a user passing the change_ordering parameter should know that the model will use the 'channels_last' ordering and can handle it themselves. But someone reusing the onnx_to_keras function in the same process may run into conversion errors for the very same model. A possible workaround is sketched below.
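
Until this is fixed in the converter itself, one workaround is to restore the data format explicitly before every conversion (a sketch; onnx_model_a and onnx_model_b are placeholder models loaded elsewhere):

import keras
from onnx2keras import onnx_to_keras

# onnx_to_keras(..., change_ordering=True) leaves the backend in 'channels_last',
# so reset it before each call to get the same behavior as on the first call.
keras.backend.set_image_data_format('channels_first')
k_model_a = onnx_to_keras(onnx_model_a, ['input'], change_ordering=True)

keras.backend.set_image_data_format('channels_first')
k_model_b = onnx_to_keras(onnx_model_b, ['input'], change_ordering=True)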

Error while using change_ordering=True

I have a pretrained EfficientNet in PyTorch that I need to convert to the .h5 format. Both onnx2keras and pytorch2keras seem to work with the default change_ordering. With change_ordering=True, I see the following error from both onnx2keras and pytorch2keras.

Operands could not be broadcast together with shapes (48, 1, 48) (112, 112, 48)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-64ccc7e60a69> in <module>
      3 
      4 onnx_model = onnx.load('/mnt/efficientnet_1.onnx')
----> 5 keras_model = onnx_to_keras(onnx_model, ['test_in1', 'test_in2'],change_ordering=True)

/anaconda/envs/image/lib/python3.6/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    234 
    235         keras.backend.set_image_data_format('channels_last')
--> 236         model_tf_ordering = keras.models.Model.from_config(conf)
    237 
    238         for dst_layer, src_layer in zip(model_tf_ordering.layers, model.layers):

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in from_config(cls, config, custom_objects)
    904     """
    905     input_tensors, output_tensors, created_layers = reconstruct_from_config(
--> 906         config, custom_objects)
    907     model = cls(inputs=input_tensors, outputs=output_tensors,
    908                 name=config.get('name'))

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in reconstruct_from_config(config, custom_objects, created_layers)
   1850       if layer in unprocessed_nodes:
   1851         for node_data in unprocessed_nodes.pop(layer):
-> 1852           process_node(layer, node_data)
   1853 
   1854   input_tensors = []

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in process_node(layer, node_data)
   1797       if not isinstance(input_tensors, dict) and len(flat_input_tensors) == 1:
   1798         input_tensors = flat_input_tensors[0]
-> 1799       output_tensors = layer(input_tensors, **kwargs)
   1800 
   1801       # Update node index map.

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    815           # Build layer if applicable (if the `build` method has been
    816           # overridden).
--> 817           self._maybe_build(inputs)
    818           cast_inputs = self._maybe_cast_inputs(inputs)
    819 

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
   2139         # operations.
   2140         with tf_utils.maybe_init_scope(self):
-> 2141           self.build(input_shapes)
   2142       # We must set self.built since user defined build functions are not
   2143       # constrained to set self.built.

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in wrapper(instance, input_shape)
    304     if input_shape is not None:
    305       input_shape = convert_shapes(input_shape, to_tuples=True)
--> 306     output_shape = fn(instance, input_shape)
    307     # Return shapes from `fn` as TensorShapes.
    308     if output_shape is not None:

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/merge.py in build(self, input_shape)
    109       else:
    110         shape = input_shape[i][1:]
--> 111       output_shape = self._compute_elemwise_op_output_shape(output_shape, shape)
    112     # If the inputs have different ranks, we have to reshape them
    113     # to make them broadcastable.

/anaconda/envs/image/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/merge.py in _compute_elemwise_op_output_shape(self, shape1, shape2)
     80           raise ValueError(
     81               'Operands could not be broadcast '
---> 82               'together with shapes ' + str(shape1) + ' ' + str(shape2))
     83         output_shape.append(i)
     84     return tuple(output_shape)

ValueError: Operands could not be broadcast together with shapes (48, 1, 48) (112, 112, 48)

I have the following versions installed:

Python version:
3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29)
[GCC 7.3.0]
ONNX version:
1.6.0
Pytorch2Keras version:
0.2.3
ONNX2Keras version:
0.0.18
Tensorflow version:
2.0.0
Pytorch version
1.2.0

AttributeError: 'ParsedRequirement' object has no attribute 'req'

I'm trying to install the onnx2keras module, but the installation fails with an error. I tried installing both with pip and from the source directory; the error is always the same. Is there any dependency I have to install?

Traceback (most recent call last):
  File "setup.py", line 16, in <module>
    reqs = [str(ir.req) for ir in install_reqs]
  File "setup.py", line 16, in <listcomp>
    reqs = [str(ir.req) for ir in install_reqs]
AttributeError: 'ParsedRequirement' object has no attribute 'req'
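
This comes from setup.py parsing requirements.txt through pip's internal API, which changed in pip 20 (ParsedRequirement exposes .requirement instead of .req). One local workaround is to read requirements.txt directly in setup.py (a sketch of the relevant lines, not the project's actual setup.py):

# setup.py (excerpt) -- read requirements.txt without pip internals
with open('requirements.txt') as f:
    reqs = [line.strip() for line in f
            if line.strip() and not line.startswith('#')]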

tensorflow version problem

pip3 install --upgrade onnx2keras
Looking in indexes: https://bytedpypi.byted.org/simple, https://mirrors.aliyun.com/pypi/simple
Collecting onnx2keras
  Downloading https://bytedpypi.byted.org/tos/pkg/pypi/onnx2keras/onnx2keras-0.0.17.tar.gz
     / 40kB 35.5MB/s
Collecting tensorflow>=2.0 (from onnx2keras)

I get an error saying that TensorFlow 2.0 is not installed, even though I have installed it:

Package              Version              Location                                              
-------------------- -------------------- ------------------------------------------------------
absl-py              0.9.0                
astor                0.8.1                
audioread            2.1.6                
cffi                 1.12.2               
cloudpickle          0.8.0                
cycler               0.10.0               
decorator            4.3.2                
falcon               1.4.1                
future               0.17.1               
gast                 0.3.2                
google-pasta         0.1.8                
grpcio               1.26.0               
h5py                 2.9.0                
horovod              0.16.0               
inflect              2.1.0                
joblib               0.13.2               
Keras                2.3.1                
Keras-Applications   1.0.7                
Keras-Preprocessing  1.0.9                
kiwisolver           1.0.1                
librosa              0.6.3                
llvmlite             0.27.1               
Markdown             3.1.1                
matplotlib           3.0.3                
mock                 2.0.0                
numba                0.42.1               
numpy                1.16.2               
onnx                 1.5.0                
onnx2keras           0.0.13               
opencv-python        4.1.2.30             
pandas               0.24.1               
pbr                  5.1.3                
Pillow               5.4.1                
pip                  18.1                 
protobuf             3.11.2               
psutil               5.6.1                
pycparser            2.19                 
pyparsing            2.3.1                
python-dateutil      2.8.0                
python-mimeparse     1.6.0                
pytz                 2018.9               
PyYAML               3.13                 
resampy              0.2.1                
scikit-learn         0.20.3               
scipy                1.2.1                
setuptools           42.0.2               
six                  1.12.0               
sklearn              0.0                  
tb-nightly           1.14.0a20190301      
tensorboardX         1.9                  
tensorflow           2.0.0a0              
termcolor            1.1.0                
tf-estimator-nightly 1.14.0.dev2019030115 
torch                1.0.1.post2          
torchvision          0.2.2.post3          
tqdm                 4.31.1               
typing               3.6.6                
typing-extensions    3.7.4.1              
Unidecode            1.0.22               
Vizer                0.1.5                
Werkzeug             0.16.0               
wheel                0.32.3               
wrapt                1.11.2               
yacs                 0.1.6 
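
The installed package is the alpha build tensorflow 2.0.0a0; under PEP 440 a pre-release sorts before 2.0.0, so it does not satisfy the tensorflow>=2.0 requirement and pip tries to fetch a stable release. Upgrading to a stable TensorFlow 2.x first should resolve it (commands are a sketch):

pip3 install --upgrade "tensorflow>=2.0.0"
pip3 install --upgrade onnx2keras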
