
onnx-tensorflow's Introduction

TensorFlow Backend for ONNX


Note: this repository is not actively maintained and will be deprecated. If you are interested in becoming the owner, please contact the ONNX Steering Committee (https://github.com/onnx/steering-committee).

Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools.

TensorFlow Backend for ONNX makes it possible to use ONNX models as input for TensorFlow. The ONNX model is first converted to a TensorFlow model, which is then executed by TensorFlow to produce the output.

This is one of two TensorFlow converter projects serving different purposes in the ONNX community: onnx-tf converts ONNX models to TensorFlow, while tensorflow-onnx (tf-onnx) converts TensorFlow models to ONNX.

Converting Models from ONNX to TensorFlow

Use CLI

Command Line Interface Documentation

From ONNX to TensorFlow: onnx-tf convert -i /path/to/input.onnx -o /path/to/output

Convert Programmatically

From ONNX to TensorFlow
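A minimal sketch of the programmatic path (paths are placeholders; export_graph writes the converted TensorFlow model to disk):

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("/path/to/input.onnx")  # load the ONNX model
tf_rep = prepare(onnx_model)                   # convert to a TensorFlow representation
tf_rep.export_graph("/path/to/output")         # save the converted TensorFlow model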

Migrating from onnx-tf to tf-onnx

We have joined forces with Microsoft to co-develop the ONNX TensorFlow frontend. Current onnx-tf frontend users should migrate to tf-onnx (https://github.com/onnx/tensorflow-onnx), into which our frontend code has been merged.
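For reference, a typical tf-onnx invocation looks like the following (assuming a SavedModel as input; consult the tf-onnx documentation for the current flags):

python -m tf2onnx.convert --saved-model /path/to/saved_model --output /path/to/output.onnx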

ONNX Model Inference with TensorFlow Backend

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("input_path")  # load the ONNX model
tf_rep = prepare(onnx_model)          # prepare the TensorFlow representation
output = tf_rep.run(inputs)           # run the model on the input tensors

More Tutorials

Running an ONNX model using TensorFlow

Production Installation

ONNX-TF requires ONNX (Open Neural Network Exchange) as an external dependency. For any issues related to ONNX installation, we refer users to the ONNX project repository for documentation and help. Notably, please ensure that protoc is available if you plan to install ONNX via pip.

The specific ONNX release version supported by the master branch of ONNX-TF can be found here. This version requirement is automatically encoded in setup.py, so users need not worry about it when installing ONNX-TF.

To install the latest version of ONNX-TF via pip, run pip install onnx-tf.

Because users often have their own preferences for which variant of TensorFlow to install (e.g., a GPU build instead of a CPU build), we do not explicitly require tensorflow in the installation script. It is therefore the user's responsibility to ensure that the proper variant of TensorFlow is available to ONNX-TF. Moreover, we require TensorFlow version == 2.8.0.
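For example, to install the CPU build matching this requirement:

pip install tensorflow==2.8.0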

Development

Coverage Status

ONNX-TensorFlow Op Coverage Status

API

ONNX-TensorFlow API

Installation

  • Install the ONNX master branch from source.
  • Install TensorFlow >= 2.8.0, tensorflow-probability, and tensorflow-addons. (Note: TensorFlow 1.x is no longer supported.)
  • Run git clone https://github.com/onnx/onnx-tensorflow.git && cd onnx-tensorflow.
  • Run pip install -e .

Folder Structure

  • onnx_tf: main source code directory.
  • test: test files.

Code Standard

  • Format code
pip install yapf
yapf -rip --style="{based_on_style: google, indent_width: 2}" $FilePath$
  • Install pylint
pip install pylint
wget -O /tmp/pylintrc https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/tools/ci_build/pylintrc
  • Check format
pylint --rcfile=/tmp/pylintrc myfile.py

Documentation Standard

Google Style Python Docstrings

Testing

Unit Tests

To perform unit tests:

pip install pytest tabulate
python -m unittest discover test

Note: Only the ONNX backend tests found in test_onnx_backend.py require the pytest and tabulate packages.

Testing requires significant hardware resources, but we nonetheless highly recommend running the complete test suite before deploying onnx-tf. The complete suite typically takes 15 to 45 minutes, depending on hardware configuration.

Model Zoo Tests

The tests in test_modelzoo.py verify whether the ONNX Model Zoo models can be successfully validated against the ONNX specification and converted to a TensorFlow representation. Inference on the converted model is not currently tested.

Prerequisites

The model zoo uses Git LFS (Large File Storage) to store ONNX model files. Make sure that Git LFS is installed on your operating system.

Running

By default, the tests assume that the model zoo repository has been cloned into this project directory. The model zoo directory is scanned for ONNX models; for each model found, the test downloads it, converts it to TensorFlow, records a test status, and deletes it. By default, the generated test report is created in the system temporary directory. Run python test/test_modelzoo.py -h for help on command line options.

git clone https://github.com/onnx/models
python test/test_modelzoo.py

Testing all models can take at least an hour to complete, depending on hardware configuration and model download times. If you expect to test some models frequently, we recommend using Git LFS to download those models before running the tests so the large files are cached locally.

Reports

The model zoo tests are run when a code contribution is merged. Generated test reports are published on the onnx-tensorflow wiki.

onnx-tensorflow's People

Contributors

arpith-jacob, azraelkuan, bddppq, chinhuang007, chudegao, djsutherland, fumihwh, grimoire, iurilgit, jacenfox, krishnannuance, lucasmahieu, marload, njanakiev, pluradj, ruimashita, sand3r-, sdmonov, seanshpark, shahirdaya, shubhamugare, silfverstrom, talc23, tedhtchang, tengyifei, tjingrant, tkng, weikexin, winnietsang, winston-zillow


onnx-tensorflow's Issues

About the 'axis' transformation

Hi guys,

I have a question about the axis transformation. In ONNX, the main data layout is channel-first, but in the support_cuda = false case, TensorFlow's main data layout is channel-last, which means the 'axis' attribute in ops like 'Argmax', 'Argmin', and 'Concat' needs to be transformed, e.g. "1" --> "3". But I can't find the related transformation in the code. Am I missing something?

Also, the transformation is not needed in NLP networks. How can we detect whether the axis transformation is needed in a mixed CNN & RNN network?

Thanks.

BR,
Kit

setup.sh never run?

Is there any command that runs setup.sh?
Maybe that is the reason the tests always fail on Python 2.7.

Example job:
https://travis-ci.org/onnx/onnx-tensorflow/jobs/354137884

Log:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
./.travis/build.sh: line 12: 10259 Aborted                 (core dumped) python -m unittest discover test/backend/

Adds training (gradient) pass

Hi,

I'm interested in training ONNX models on TensorFlow.
However, this project currently supports only the inference pass (forward pass).

I know that ONNX supports only inference ops now.
I also know that ONNX is going to support training ops in the future.
https://github.com/onnx/onnx/wiki/%5BAnnouncement%5D-ONNX-working-groups-established#training
So, for now, the converter would have to derive gradient graphs from the inference graphs.

Is there any schedule for supporting the training (gradient) pass?
And would adding a training pass be a welcome contribution?

Thanks

Conv Kernel Shape

@fumihwh I'm working on the tutorial, but it seems we have a problem recognizing the filter kernel size in conv2d.

Tensorflow filter size is specified as such (https://www.tensorflow.org/api_docs/python/tf/nn/conv2d):
[filter_height, filter_width, in_channels, out_channels]

Thus it seems we should take the first d elements (the spatial dimensions) instead of the current approach. Maybe we should change it?

[Front-End] How to deal with data_format: 'NHWC' (tf) vs. 'NCHW' (onnx)

I would like ideas from the community on how to implement the conversion from tensorflow's 'NHWC' data_format to onnx's 'NCHW' data_format.

Should we add an op node of type Transpose?
If yes, the classmethod handler mechanism (handler_name) does not seem well suited to adding 2 nodes to the "ops_proto" list...
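One possible direction, as a hedged sketch (run_in_nchw is a hypothetical helper, not part of this project):

import tensorflow as tf

# Hypothetical helper: run a channel-first (ONNX-style) op on NHWC data by
# transposing into NCHW before the op and back to NHWC afterwards.
def run_in_nchw(op, nhwc_tensor):
  nchw = tf.transpose(nhwc_tensor, perm=[0, 3, 1, 2])  # NHWC -> NCHW
  result = op(nchw)
  return tf.transpose(result, perm=[0, 2, 3, 1])       # NCHW -> NHWC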

tf.reset_default_graph() not run first makes frontend tests fail

When I ran pytest locally, I faced a problem: the frontend tests always fail when run together with the backend tests.
If I run the frontend tests alone, they pass.

So I debugged and found that tf.reset_default_graph() is run at the end of each frontend test case, not at the start.
I suggest putting tf.reset_default_graph() first, just under def do_test_expected(self):.

Order for multiple outputs

Looking at these lines:

external_output = dict(filter(lambda kv: kv[0] in self.predict_net.external_output, list(self.predict_net.output_dict.items())))
output_values = sess.run(list(external_output.values()), feed_dict=feed_dict)

The order of the outputs is not deterministic prior to Python 3.7, since a dict can store and return values in any order.

Replacing the dict with an OrderedDict should fix this.
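A hedged drop-in sketch for the two lines quoted above (it assumes the same self, sess, and feed_dict context):

from collections import OrderedDict

# Preserve the declared output order on every Python version.
external_output = OrderedDict(
    (k, v) for k, v in self.predict_net.output_dict.items()
    if k in self.predict_net.external_output)
output_values = sess.run(list(external_output.values()), feed_dict=feed_dict)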

How to get output from an intermediate layer?

I have been reading the source code for a while now but cannot figure out how to get the output of an intermediate layer given its name. [It is not mentioned in the documentation.]
I can only use the run function, which makes a forward pass through the whole network.
Any suggestions?
Thanks.

some problems with node.attr in the frontend

After frontend.py L149, I get a node.attr like this:

{'padding': s: "SAME"
, 'data_format': s: "NHWC"
, 'strides': list {
  i: 1
  i: 1
  i: 1
  i: 1
}
, 'use_cudnn_on_gpu': b: true
}

If I just pass it to make_node in helper.py, I get:

{ValueError}Value "s: "NHWC"
" is not valid attribute data type.

So I tried to get the values before calling make_node, using node.attr = dict(map(lambda item: (item[0], get_attribute_value(item[1])), node.attr.items())), and then I get:
then I get

{ValueError}Protocol message has no non-repeated submessage field "t"

So there are two problems:

  • the current frontend.py cannot handle this attr structure (maybe it changed?) -> can be fixed with my code above
  • how to deal with the different behavior between tf's protobuf3 and onnx's protobuf2 (HasField in protobuf3 raises an error) -> create a get_attribute_value in onnx-tensorflow, or make onnx's get_attribute_value handle both protobuf versions

Reshape Input for Dynamic Batch Size

Hey guys, as I have written in this issue before, is there any way we can make the input shape more dynamic? Not only the batch_size: if we use an RNN, for example, we might want to keep the length dimension dynamic as well.

pass additional args to handlers

Currently we pass consts to handlers in the frontend, but in some cases we need more information to handle ops.
For example, we need at least the input shape to calculate the pad size for pooling and conv, because SAME and VALID are deprecated.
So I have to pass all ops' _output_shapes to the handlers.

Then there are three questions (one possible answer to the third is sketched after the list):

  1. do we really need additional args?
  2. if yes, what should we pass to the handlers?
  3. how do we do that in a smart way?
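One possible answer to the third question, as a hedged sketch (the handler name and arguments are hypothetical): let handlers accept **kwargs, so extra context can be threaded through without changing every signature.

# Hypothetical handler signature: extra context such as output shapes is
# passed as keyword arguments, and each handler picks what it needs.
def handle_max_pool(node, consts=None, output_shapes=None, **kwargs):
  # The input shape is needed to compute explicit ONNX pads, since
  # TF's SAME/VALID padding strings have no direct ONNX attribute.
  input_shape = output_shapes[node.inputs[0]]
  ...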

Updates necessary for newly added test cases

We're shooting for the ONNX v1 announcement at NIPS next week, so it'd be awesome to make sure that all tests pass with the latest ONNX master.

We added a few backend test cases upstream in ONNX (by exporting some of PyTorch's tests), and some of them do not pass in the TF converter. Those tests do pass on the Caffe2 backend, but they might still contain deviations from the spec. If you find discrepancies, let us know.

I haven't dug into the details yet, but from the contbuild it looks like there are some shape mismatches.

@tjingrant - would you be able to take a look?

Issue in test_node of front-end

Hi,
I think something bad is happening with test_node in the front-end.

In ONNX, inputs and ops are assumed to be in 'NCHW' order.

But in Tensorflow, data are often in 'NHWC' order. (Some tf operations support only 'NHWC', and for other operations it is impossible to know the data_format (Placeholders, for example).)
So we could imagine requiring that users put all their tensorflow operations in the 'NHWC' data format.

In that case, test_node should consider only 'NHWC' inputs on the tensorflow side.
That is not the case...

Duplicate source codes

In backend.py and frontend.py, I find a lot of duplicated source code that is also in common.py. This is not good for maintenance.
I just want to keep it in common.py and remove the unneeded copies from backend.py and frontend.py.

Once this is done, I think I could implement some ops (handlers) supported in onnx but not here yet.

Wrong dimension from pytorch ResNet architecture

After exporting the model from pytorch via onnx, I tried to load it using onnx-tensorflow. However, this error occurred:

ValueError: Dimensions must be equal, but are 8192 and 2048 for 'MatMul_1' (op: 'MatMul') with input shapes: [1,8192], [2048,2].

This doesn't happen in onnx-caffe2.
Below is how I loaded it:

import onnx
import onnx_tf.backend as backend
model = onnx.load("model.proto")
rep = backend.prepare(model)

The same code works fine in onnx-caffe2

import onnx
import onnx_caffe2.backend as backend
model = onnx.load("model.proto")
rep = backend.prepare(model)

Am I missing something? Or are some of the operators not supported yet?
You can see the resnet code from here https://github.com/pytorch/vision/blob/v0.1.9/torchvision/models/resnet.py

onnx version 1.0.0
onnx-tf version 1.0.0

UPDATE
Tried it with onnx==0.2 as well, still no luck.

[TODO list] All things we have to add/improve

  • Add the simple split op (not split_v), which splits a tensor into 'n' parts along 'axis'.
  • Support the front-end test case squeeze(input) with input.shape = [1,1,1,1] (in that case, the shape of the output node is not defined in the TF graph).
  • Improve front-end test coverage (more cases)
  • Improve back-end test coverage (more cases)
  • Add front-end support for convolution (@tjingrant)

Support kernel_shape in Conv operator

Right now I am getting a bunch of warnings:

/Users/terrytangyuan/anaconda3/lib/python3.6/site-packages/onnx_tf/backend.py:677: UserWarning: Unsupported kernel_shape attribute by Tensorflow in Conv operator. The attribute will be ignored.
  UserWarning)

These appear when I try to run a couple of onnx models from the onnx-models repo, e.g. resnet50 and vgg19.

How to import an ONNX file into Tensorflow Serving

Hello everyone!

I use pytorch to write an mnist model and export it as a .proto file via ONNX. How can I import this .proto file into Tensorflow Serving and use Tensorflow Serving to provide the service?

I hope someone knows how to solve this problem. Thank you!
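One hedged pointer, not an official recipe: TensorFlow Serving loads SavedModels, so one route is converting the ONNX file with onnx-tf and exporting into the versioned directory layout Serving expects (paths below are placeholders):

import onnx
from onnx_tf.backend import prepare

tf_rep = prepare(onnx.load("mnist.proto"))  # the ONNX file exported from pytorch
tf_rep.export_graph("models/mnist/1")       # version subdirectory for TF Serving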

remove consts/inputs after use them as attr in frontend

When an input in TF corresponds to an attr in onnx, as in handle_reshape:

  @classmethod
  def handle_reshape(cls, node, consts):
    assert node.inputs[1] in consts.keys()
    shape = consts[node.inputs[1]]
    return helper.make_node("Reshape",
                            [node.inputs[0]],
                            [node.name],
                            shape=shape)

After it is used as an attr of the Reshape node, it should no longer be an initializer or input.
We should delete it from the graph (remove it from inputs_proto and consts_proto).
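A hedged sketch of that cleanup (consume_const is a hypothetical helper; inputs_proto and consts_proto are assumed to be lists of protos with a name field):

# Hypothetical helper: once a const has been folded into a node attribute,
# drop it from the graph so it is not emitted as an input or initializer.
def consume_const(name, consts, inputs_proto, consts_proto):
  consts.pop(name, None)
  inputs_proto[:] = [i for i in inputs_proto if i.name != name]
  consts_proto[:] = [c for c in consts_proto if c.name != name]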

should use specialized handler before default one

As the title says, I think we should use a specialized handler before the default one.

This problem occurred when I wanted to remove onnx_tf_op_map in backend.py and use ONNX_OP_TO_TF_OP in common.py instead.
ONNX_OP_TO_TF_OP has concat, which is not in onnx_tf_op_map, so when I run the tests, backend.py uses the default handler (handle_trivial) instead of handle_concat.
In handle_trivial, attr_map converts axis to dim, which is not an attr of tf.concat.

Moreover, we should also use ONNX_TF_PER_OP_ATTR_MAP instead of ONNX_ATTR_TO_TF_ATTR.
ONNX_TF_PER_OP_ATTR_MAP is much more robust.

def _onnx_node_to_tensorflow_op(cls, node, input_dict):
  op_name_lowered = cls.op_name_to_lower(node.op_type)
  if op_name_lowered in cls.onnx_tf_op_map.keys():
    return cls.handle_trivial(node, input_dict)
  handler_name = "handle_" + op_name_lowered
  # Check if specialized handler exists.
  if handler_name in dir(cls):
    method_to_call = getattr(cls, handler_name)
    return method_to_call(node, input_dict)
  else:
    raise NotImplementedError("{} op is not implemented.".format(node.op_type))

elif node.op in TF_OP_STR_TO_ONNX_OP.keys():
  # Remove tensorflow-specific attrs that are not
  # needed/allowed in ONNX.
  attr_to_remove = ["_output_shapes", "T", "seed2", "Tidx"]
  node.attr = dict(filter(lambda pair: pair[0]
                          not in attr_to_remove, node.attr.items()))
  node_output = node.name
  ops_proto.append(make_node(TF_OP_STR_TO_ONNX_OP[node.op],
                             node.inputs,
                             [node_output],
                             name=node.name,
                             **node.attr))
else:
  handler_name = "handle_" + op_name_to_lower(node.op)
  # Check if specialized handler exists.
  if handler_name in dir(cls):
    method_to_call = getattr(cls, handler_name)
    ops_proto.append(method_to_call(node, consts))
  else:
    raise NotImplementedError("{} op is not implemented.".format(node.op))
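A hedged sketch of the proposed reordering of the backend dispatch above: check for a specialized handler first, and fall back to handle_trivial only when none exists.

def _onnx_node_to_tensorflow_op(cls, node, input_dict):
  op_name_lowered = cls.op_name_to_lower(node.op_type)
  handler_name = "handle_" + op_name_lowered
  # A specialized handler takes precedence over the trivial mapping.
  if handler_name in dir(cls):
    return getattr(cls, handler_name)(node, input_dict)
  if op_name_lowered in cls.onnx_tf_op_map:
    return cls.handle_trivial(node, input_dict)
  raise NotImplementedError("{} op is not implemented.".format(node.op_type))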

failed in CI testing

There are two reasons:

  • lack of ops
  • results not matching

Actually, I am working on both and have already finished the first one.

support opset

ONNX uses opsets to version ops, which onnx-tensorflow does not support yet.
Unless we support opsets here, we will lose a lot of compatibility.
One idea is to separate the handlers into other files, one file per op.
Each file would contain both frontend and backend versioned handlers.
For example:

class AddHandler:

  opsets = [1, 3]  # opset versions this handler implements

  def __init__(self, opset):
    # Pick the highest implemented version that does not exceed the
    # requested opset.
    self.opset = max(v for v in self.opsets if v <= opset)

  def get_backend_handler(self):
    return getattr(self, 'backend_%d' % self.opset)

  def get_frontend_handler(self):
    return getattr(self, 'frontend_%d' % self.opset)

  def backend_1(self):
    ...  # opset 1 backend implementation

  def backend_3(self):
    ...  # opset 3 backend implementation

  def frontend_1(self):
    ...  # opset 1 frontend implementation

  def frontend_3(self):
    ...  # opset 3 frontend implementation

How about this?

Frontend support

Hi,
I just want to know whether you plan to develop a more complete TF frontend (TF -> ONNX), with tests and examples.

description misleads

The name "Tensorflow Backend for ONNX" misleads people.
Even a member of onnx created a new repository, tensorflow-onnx, for converting TensorFlow models to ONNX.
Maybe we should change the description.

Force NCHW format

Hi there, the converter seems to add a bunch of transpose ops into the graph. Can we force it to build the graph in NCHW format?

code style

I know the README recommends pylint with the tensorflow code style.
But in the code, 4-space indentation also exists...
We should unify it. A simple way is to use autopep8, although it differs from the tensorflow code style.

Super-resolution network produces incorrect results

Hey folks,

I'm writing up a tutorial for importing an ONNX graph to TF using this library. I'm using a super-resolution network as an example. Here's my draft of the tutorial: https://gist.github.com/jamesr66a/7c2c18e1479d086d28f92217975111ec

Unfortunately, when I run the imported network in Tensorflow, I get a bizarre image result:

Before: [image]

After: [image]

Not quite sure what's going on here, but here are visualizations of the network topologies:

ONNX model I imported: [image]

Tensorflow model that was produced: [image]

The ONNX serialized network and source image (respectively) can be found here: https://github.com/onnx/tutorials/blob/master/tutorials/assets/super_resolution.onnx https://github.com/onnx/tutorials/blob/master/tutorials/assets/super-res-input.jpg. Additionally, we have an mxnet tutorial that produces the correct result, which you can use for comparison: https://github.com/onnx/tutorials/blob/master/tutorials/OnnxMxnetImport.ipynb

Could you all take a look?

LSTM support?

Hi,

I'm trying to convert a very simple LSTM from Pytorch to Tensorflow via ONNX, but I'm getting an error in the onnx-tensorflow prepare function.

Are LSTMs supported by onnx-tensorflow? If not, why not, and how would I go about adding them?

Best,
Alexander.

Error message:

Traceback (most recent call last):
  File "convert_onnx_tf.py", line 19, in <module>
    tf_rep = prepare(model)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx_tf/backend.py", line 385, in prepare
    super(TensorflowBackend, cls).prepare(model, device, **kwargs)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx/backend/base.py", line 53, in prepare
    onnx.checker.check_model(model)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx/checker.py", line 32, in checker
    proto.SerializeToString(), ir_version)
onnx.onnx_cpp2py_export.checker.ValidationError: Output size 3 not in range [min=1, max=2].

==> Context: Bad node spec: input: "0" input: "11" input: "15" input: "24" input: "" input: "1" input: "2" output: "25" output: "26" output: "27" op_type: "LSTM" attribute { name: "hidden_size" i: 3 type: INT } doc_string: "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(408): wrapper\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/rnn.py(315): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/rnn.py(181): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(345): _slow_forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(355): __call__\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/jit/__init__.py(284): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(357): __call__\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/jit/__init__.py(251): trace\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(132): _export\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(83): export\n/Users/koller/Documents/workspace/onnx_to_tensorflow/lstm.py(36): <module>\n"

Output produced by torch.onnx.export(lstm, (inputs,hidden), "lstm.onnx", verbose=True):

graph(%0 : Float(5, 1, 3)
      %1 : Float(1, 1, 3)
      %2 : Float(1, 1, 3)
      %3 : Float(12, 3)
      %4 : Float(12, 3)
      %5 : Float(12)
      %6 : Float(12)) {
  %7 : UNKNOWN_TYPE = Undefined(), scope: LSTM
  %8 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%3), scope: LSTM
  %9 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%3), scope: LSTM
  %10 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%3), scope: LSTM
  %11 : UNKNOWN_TYPE = Concat[axis=0](%8, %9, %10), scope: LSTM
  %12 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%4), scope: LSTM
  %13 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%4), scope: LSTM
  %14 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%4), scope: LSTM
  %15 : UNKNOWN_TYPE = Concat[axis=0](%12, %13, %14), scope: LSTM
  %16 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%5), scope: LSTM
  %17 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%5), scope: LSTM
  %18 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%5), scope: LSTM
  %19 : UNKNOWN_TYPE = Concat[axis=0](%16, %17, %18), scope: LSTM
  %20 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%6), scope: LSTM
  %21 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%6), scope: LSTM
  %22 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%6), scope: LSTM
  %23 : UNKNOWN_TYPE = Concat[axis=0](%20, %21, %22), scope: LSTM
  %24 : UNKNOWN_TYPE = Concat[axis=0](%19, %23), scope: LSTM
  %25 : Float(5, 1, 3), %26 : Float(1, 1, 3), %27 : Float(1, 1, 3) = LSTM[hidden_size=3](%0, %11, %15, %24, %7, %1, %2), scope: LSTM
  return (%25, %26, %27);
}

The LSTM itself is the first example from http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html

[Front-end] Concat tf to onnx

Hi,
I am trying to implement the concat op from TF to the ONNX graph.
I am facing a "big" problem because the TF concat API is tf.concat([T0, T1, ...], axis),
and axis is considered an input.

But TF does not put the values of inputs in the tf graph.
So it is not possible to get the axis value in order to put it into the axis attr of the onnx graph...

Have you ever faced this kind of issue?
Does someone have ideas to solve it?

NOTE: LucasMahieu@7eedae1
This is my work so far on the concat op.

saving to TF model

Not really an issue, sorry. But I was wondering whether it would be possible to use this to load an onnx model and save it with TF in the .meta format (export_meta_graph)?

I gave it a quick try, but it seems it does not set up the default TF graph.

Thanks

output dtype checker (caster?)

onnx has started checking outputs' dtypes. We should deal with that.
And one point should be clarified: onnx checks the ref_outputs' dtypes against the backend outputs'.
I will open an issue arguing that they should check against the defs (schema), not the ref_outputs, because some ops allow multiple dtypes in their constraints. It should not be an error when an output dtype differs from the ref_output dtype but is still within the constraints.

UPDATE:
Given the input types, the output types are determined; the output types should follow the input types. There are special cases, e.g. TopK's indices are int64 in onnx but int32 in tf.
@tjingrant
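A hedged illustration of the TopK case (TF2-style API; in the converter the cast would be inserted automatically):

import tensorflow as tf

# ONNX TopK produces int64 indices, while TF's top_k returns int32,
# so a cast is needed to satisfy ONNX's output dtype check.
values, indices = tf.math.top_k(tf.constant([1.0, 3.0, 2.0]), k=2)
indices = tf.cast(indices, tf.int64)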

No test discovered?

$  ~/PycharmProjects/onnx-tensorflow > python -m unittest discover test

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Cut a release matching onnx-v1

We're shooting for the ONNX v1 announcement at NIPS next week, and it makes sense to publish matching artifacts to PyPI. The branch is at https://github.com/onnx/onnx/tree/rel-1.0, and even though some secondary tests are failing (#17), the original set of model tests does pass.

This is just a heads up. Would you be able to help cut the release early next week (by Dec 5th)?

how to deal with combinatorial ops

For example, tf has an rsqrt op, which is just 1/sqrt(x), but onnx has only sqrt and reciprocal.
Do we need to support such ops as follows?

  @classmethod
  def handle_rsqrt(cls, node, **kwargs):
    sqrt_node = helper.make_node("Sqrt",
                                 [node.inputs[0]],
                                 [node.name.replace("Rsqrt", "Sqrt")])
    reciprocal_node = helper.make_node("Reciprocal",
                                       [sqrt_node.output[0]],
                                       [node.name.replace("Rsqrt", "Reciprocal")])
    identity_node = helper.make_node("Identity",
                                     [reciprocal_node.output[0]],
                                     [node.name])
    return [sqrt_node, reciprocal_node, identity_node]
