onnx-tensorrt's Introduction

TensorRT Backend For ONNX

Parses ONNX models for execution with TensorRT.

See also the TensorRT documentation.

For the list of recent changes, see the changelog.

For a list of commonly seen issues and questions, see the FAQ.

For business inquiries, please contact [email protected]

For press and other inquiries, please contact Hector Marinez at [email protected]

Supported TensorRT Versions

Development on the main branch is for the latest version of TensorRT 10.0 with full-dimensions and dynamic shape support.

For previous versions of TensorRT, refer to their respective branches.

Supported Operators

Current supported ONNX operators are found in the operator support matrix.

Installation

Dependencies

Building

For building within Docker, we recommend setting up and using the Docker containers as instructed in the main TensorRT repository to build the onnx-tensorrt library.

Once you have cloned the repository, you can build the parser libraries and executables by running:

cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=<path_to_trt> && make -j
# Ensure that you update your LD_LIBRARY_PATH to pick up the location of the newly built library:
export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH

Note that this project has a dependency on CUDA. By default the build will look in /usr/local/cuda for the CUDA toolkit installation. If your CUDA path is different, override the default path by providing -DCUDA_TOOLKIT_ROOT_DIR=<path_to_cuda_install> in the CMake command.

To build with protobuf-lite support, add -DUSE_ONNX_LITE_PROTO=1 to the end of the cmake command.
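Combining the options above, a sketch of a configure line for a non-default CUDA location with protobuf-lite enabled (all paths below are placeholders, not project defaults):

```shell
# Hypothetical paths; substitute your actual TensorRT and CUDA installs.
cmake .. -DTENSORRT_ROOT=/opt/tensorrt \
         -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.1 \
         -DUSE_ONNX_LITE_PROTO=1
make -j
```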

InstanceNormalization Performance

There are two implementations of InstanceNormalization that may perform differently depending on various parameters. By default, the parser will use the native TensorRT implementation of InstanceNorm. Users that want to benchmark using the plugin implementation of InstanceNorm can unset the parser flag kNATIVE_INSTANCENORM prior to parsing the model. Note that the plugin implementation cannot be used for building version compatible or hardware compatible engines, and attempting to do so will result in an error.

C++ Example:

// Unset the kNATIVE_INSTANCENORM flag to use the plugin implementation.
parser->unsetFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM);

Python Example:

# Unset the NATIVE_INSTANCENORM flag to use the plugin implementation.
parser.clear_flag(trt.OnnxParserFlag.NATIVE_INSTANCENORM)

Executable Usage

There are currently two officially supported tools for users to quickly check whether an ONNX model can be parsed and built into a TensorRT engine.

For C++ users, there is the trtexec binary that is typically found in the <tensorrt_root_dir>/bin directory. The basic command for running an ONNX model is:

trtexec --onnx=model.onnx

Refer to the link or run trtexec -h for more information on CLI options.

For Python users, there is the polygraphy tool. The basic command for running an ONNX model is:

polygraphy run model.onnx --trt

Refer to the link or run polygraphy run -h for more information on CLI options.

Python Modules

Python bindings for the ONNX-TensorRT parser are packaged in the shipped .whl files.

TensorRT 10.0 supports ONNX release 1.16.0. Install it with:

python3 -m pip install onnx==1.16.0

The ONNX-TensorRT backend can be installed by running:

python3 setup.py install

ONNX-TensorRT Python Backend Usage

The TensorRT backend for ONNX can be used in Python as follows:

import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)

C++ Library Usage

The model parser library, libnvonnxparser.so, has its C++ API declared in this header:

NvOnnxParser.h

Tests

After installation (or inside the Docker container), ONNX backend tests can be run as follows:

Real model tests only:

python onnx_backend_test.py OnnxBackendRealModelTest

All tests:

python onnx_backend_test.py

You can use the -v flag to make the output more verbose.

Pre-trained Models

Pre-trained models in ONNX format can be found at the ONNX Model Zoo.

onnx-tensorrt's Issues

make -j8 error

/home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:1724:82: error: invalid new-expression of abstract class type 'FancyActivationPlugin'
new FancyActivationPlugin(FancyActivationPlugin::THRESHOLDED_RELU, alpha),
^
/home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:415:33: note: in definition of macro 'RETURN_FIRST_OUTPUT'
nvinfer1::ILayer* layer_ptr = layer;
^
/home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp: In function 'onnx2trt::NodeImportResult onnx2trt::{anonymous}::importUpsample(onnx2trt::IImporterContext*, const onnx2trt_onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)':
/home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:1836:67: error: invalid new-expression of abstract class type 'ResizeNearestPlugin'
RETURN_FIRST_OUTPUT(ctx->addPlugin(new ResizeNearestPlugin(scale),
^
/home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:415:33: note: in definition of macro 'RETURN_FIRST_OUTPUT'
nvinfer1::ILayer* layer_ptr = layer;
^
In file included from /home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:27:0:
/home/test1050/tools/onnx-tensorrt/ResizeNearest.hpp:30:7: note: because the following virtual functions are pure within 'ResizeNearestPlugin':
class ResizeNearestPlugin final : public onnx2trt::Plugin {
^
In file included from /home/test1050/tools/onnx-tensorrt/NvOnnxParser.h:26:0,
from /home/test1050/tools/onnx-tensorrt/onnx2trt.hpp:25,
from /home/test1050/tools/onnx-tensorrt/builtin_op_importers.hpp:25,
from /home/test1050/tools/onnx-tensorrt/builtin_op_importers.cpp:23:
/home/test1050/tools/TensorRT-5.0.0.10/include/NvInfer.h:2666:25: note: virtual const char* nvinfer1::IPluginExt::getPluginVersion() const
virtual const char* getPluginVersion() const = 0;
^
/home/test1050/tools/TensorRT-5.0.0.10/include/NvInfer.h:2676:25: note: virtual nvinfer1::IPluginExt* nvinfer1::IPluginExt::clone() const
virtual IPluginExt* clone() const = 0;
^
CMakeFiles/nvonnxparser.dir/build.make:110: recipe for target 'CMakeFiles/nvonnxparser.dir/builtin_op_importers.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser.dir/builtin_op_importers.cpp.o] Error 1
CMakeFiles/nvonnxparser_static.dir/build.make:86: recipe for target 'CMakeFiles/nvonnxparser_static.dir/ModelImporter.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser_static.dir/ModelImporter.cpp.o] Error 1
CMakeFiles/nvonnxparser.dir/build.make:86: recipe for target 'CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o] Error 1
CMakeFiles/nvonnxparser_static.dir/build.make:110: recipe for target 'CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o] Error 1
CMakeFiles/Makefile2:68: recipe for target 'CMakeFiles/nvonnxparser_static.dir/all' failed
make[1]: *** [CMakeFiles/nvonnxparser_static.dir/all] Error 2
CMakeFiles/Makefile2:106: recipe for target 'CMakeFiles/nvonnxparser.dir/all' failed
make[1]: *** [CMakeFiles/nvonnxparser.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2

Too many errors; just showing a few here.
cmake version: 3.5.1
CUDA: 9.0
nvcc: Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
gcc version: 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10)

onnx_backend_test.py libnvonnxparser.so.0: cannot open shared object file

Hi

After I make the project, I encountered a No module named '_nv_onnx_parser_bindings' error (same as 29),
so I imported sys and ran sys.path.append('/home/onnx-tensorrt/build/lib.linux-x86_64-3.5/onnx_tensorrt/parser/'),
and now the new error is this:

  File "onnx_backend_test.py", line 31, in <module>  import onnx_tensorrt.backend as trt
  File "/home/onnx-tensorrt/onnx_tensorrt/__init__.py", line 23, in <module>  from . import backend
  File "/home/onnx-tensorrt/onnx_tensorrt/backend.py", line 22, in <module>   from . import parser
  File "/home/onnx-tensorrt/onnx_tensorrt/parser/__init__.py", line 24, in <module>  from _nv_onnx_parser_bindings import *
ImportError: libnvonnxparser.so.0: cannot open shared object file: No such file or directory


libnvonnxparser.so.0 is in onnx-tensorrt/build, but I can't get the path to work.
Any advice? Thanks!

ImportError: No module named 'onnx_tensorrt.parser._nv_onnx_parser_bindings'

After running all the installation instruction provided, onnx-tensorrt was successfully installed with python==3.5, cuda==9.2, cudnn==7.2, nvcc==9.2.

But when running the command to test the library, the test program returns this error:


Traceback (most recent call last):
  File "onnx_backend_test.py", line 31, in <module>
    import onnx_tensorrt.backend as trt
  File "/home/anish-fujitsu/onnx-tensorrt/onnx_tensorrt/__init__.py", line 23, in <module>
    from . import backend
  File "/home/anish-fujitsu/onnx-tensorrt/onnx_tensorrt/backend.py", line 22, in <module>
    from . import parser
  File "/home/anish-fujitsu/onnx-tensorrt/onnx_tensorrt/parser/__init__.py", line 23, in <module>
    from ._nv_onnx_parser_bindings import *
ImportError: No module named 'onnx_tensorrt.parser._nv_onnx_parser_bindings'

Any help, appreciated.

python wrapper error

Specs are: TensorRT-3.0.4
Output of step: python setup.py build

In file included from nv_onnx_parser_bindings_wrap.cpp:3867:0:
NvOnnxParser.h:26:21: fatal error: NvInfer.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

Error in Transpose while generating trt engine

If I convert an ONNX model that includes a Transpose layer to a TRT plan, trt_builder.build_cuda_engine(trt_network) returns NULL.
But if I discard the Transpose layer, the TRT engine can be built successfully.

How can I get through this problem? Thanks!

A few library requests

Hey onnx-tensorrt developers. I'm a big fan of the library, but I have a few requests that I feel will make life easier for developers using onnx-tensorrt:

  • Apply semantic versioning to the library and tag / provide release branches for releases. This will make it really easy for devs to integrate and test releases.
  • Along with minor releases document the versions of your tested dependencies.
  • Run some automated tests and linters in a CI which will make it quite easy for contributors to provide patches.

I know bandwidth is tight and people are working hard, but I think these requirements would really help increase the adoption of onnx-tensorrt. I'd actually be willing to contribute a little to help out on the CI side if needed. Keep up the good work! Love the speedups I'm getting with TRT.

Any thoughts on supporting variable-sized tensors?

Hi @benbarsdell, some nets, such as Detectron, have ops like GenerateProposal that generate different shapes of tensors depending on input content. Any thoughts on how we can support this? In fact, does TensorRT support this kind of situation at all?

ERROR in shape Assertion

While converting a customized onnx model, I get the following error. Any ideas as to what could be the cause of this?
RuntimeError: While parsing node number 153:
onnx-tensorrt/builtin_op_importers.cpp:1300 In function importReshape:
[8] Assertion failed: get_shape_size(new_shape) == get_shape_size(tensor.getDimensions())
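The assertion being tripped is the usual reshape invariant: the requested shape must contain exactly as many elements as the input tensor. A quick numpy illustration of the check (shapes here are made up, not taken from the failing model):

```python
import numpy as np

# Hypothetical shapes: reshape is only valid when element counts match,
# which is what the parser's get_shape_size comparison enforces.
old_shape = (1, 512, 7, 7)
good_shape = (1, 512 * 7 * 7)   # same element count -> passes
bad_shape = (1, 512, 7, 8)      # different element count -> fails

assert np.prod(old_shape) == np.prod(good_shape)
assert np.prod(old_shape) != np.prod(bad_shape)

x = np.zeros(old_shape, dtype=np.float32)
y = x.reshape(good_shape)  # succeeds because element counts match
print(y.shape)  # (1, 25088)
```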

Enhancement and Question

For best performance in production, we normally convert the ONNX file to a TensorRT engine.
Question:

  1. After onnx2trt onnxfile -o foo.trt, do I still need onnx-tensorrt, even though some plugins are provided by onnx-tensorrt? Or is the only dependency the CUDA library?

  2. A C++ example of running the converted engine file would be greatly appreciated.
    Thanks!

Please add an "Unsqueeze" layer

Hi.
I am using onnx-tensorrt to convert an ONNX model into a model for TensorRT. However, "Unsqueeze" is not supported, so the model cannot be converted. Please add support for an "Unsqueeze" layer.

Build onnx2trt problem on Jetson Xavier

Hello.
I recently purchased Jetson Xavier and I am trying to build onnx2trt.
Jetpack 4.0 has already been installed.
The environment is Ubuntu 18.04, CUDA 10, Cudnn 7.3, TensorRT 5, GNU 7.3, Cmake 3.10.2 Protobuf 3.6.1, and so on.

On Jetson TX2 it built without any problems, but on Xavier an error occurs when building.
The errors are as follows.

[39%] Building CXX object CMakeFiles/nvonnxparser_static.dir/NvOnnxParser.cpp.o
In file included from /home/nvidia/work/onnx-tensorrt/ImporterContext.hpp:25:0,
from /home/nvidia/work/onnx-tensorrt/ModelImporter.hpp:26,
from /home/nvidia/work/onnx-tensorrt/NvOnnxParser.cpp:24:
/home/nvidia/work/onnx-tensorrt/onnx2trt.hpp:44:14: error: 'function' in namespace 'std' does not name a template type
typedef std::function<NodeImportResult(IImporterContext* ctx,
^~~~~~~~
In file included from /home/nvidia/work/onnx-tensorrt/ModelImporter.hpp:26:0,
from /home/nvidia/work/onnx-tensorrt/NvOnnxParser.cpp:24:
/home/nvidia/work/onnx-tensorrt/ImporterContext.hpp: In member function 'virtual nvinfer1::IPluginLayer* onnx2trt::ImporterContext::addPlugin(onnx2trt::Plugin*, const std::vector<nvinfer1::ITensor*>&)':
/home/nvidia/work/onnx-tensorrt/ImporterContext.hpp:57:60: error: invalid new-expression of abstract class type 'onnx2trt::TypeSerializingPlugin'
auto* wrapped_plugin = new TypeSerializingPlugin(plugin);
^

It seems it cannot be built successfully, perhaps due to Ubuntu 18.04 / GCC 7.3.

Is there a way to build in this environment?
Thanks.

Does onnx-tensorrt support TensorRT 4?

  1. I don't know whether onnx-tensorrt needs to be refactored, because TensorRT 4 already supports the ONNX specification.

  2. I don't know whether onnx-tensorrt manages a registry of extension implementations for additional operations (if TensorRT 4 is supported).

thanks.

Build onnx2trt problem on Windows 10

Hi.

Recently, TensorRT 5 RC for Windows has been released.
So I am trying to build onnx2trt on Windows 10.
However, the code seems to be written for Linux, and I have failed to build it.
Is there a way to make it build on Windows?

Thanks.

Segmentation fault (core dumped) and libnvonnxparser.so.0: undefined symbol

envs:
onnx(1.3.0) from pip
tensorrt(4.0.1.6) from source
pycuda(2018.1.1) from onnx_tensorrt source
onnx-tensorrt (0.1.0) from github master source

When I run tests or convert a PyTorch-exported ONNX model to a TensorRT model, it crashes:

----------------------------------------------------------------
Input filename:   ocr_api.onnx
ONNX IR version:  0.0.3
Opset version:    9
Producer name:    pytorch
Producer version: 0.4
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
Parsing model
Segmentation fault (core dumped)

When I run the tests again, it says:

  File "onnx_backend_test.py", line 30, in <module>
    import onnx_tensorrt
  File "/home/wfy/anaconda3/envs/caffe2/lib/python3.5/site-packages/onnx_tensorrt-0.1.0-py3.5-linux-x86_64.egg/onnx_tensorrt/__init__.py", line 23, in <module>
    from . import backend
  File "/home/wfy/anaconda3/envs/caffe2/lib/python3.5/site-packages/onnx_tensorrt-0.1.0-py3.5-linux-x86_64.egg/onnx_tensorrt/backend.py", line 22, in <module>
    from . import parser
  File "/home/wfy/anaconda3/envs/caffe2/lib/python3.5/site-packages/onnx_tensorrt-0.1.0-py3.5-linux-x86_64.egg/onnx_tensorrt/parser/__init__.py", line 23, in <module>
    from ._nv_onnx_parser_bindings import *
ImportError: /usr/local/lib/libnvonnxparser.so.0: undefined symbol: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev

Errors trying to build TRT Onnx related samples

Hi there, on TensorRT 5.0.2 GA with onnx-tensorrt installed from source, I've encountered the following errors while trying to build the ONNX-related samples. To be more specific, while running make on sampleINT8API I get:

../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleINT8API.cpp
sampleINT8API.cpp: In member function 'bool sampleINT8API::build()':
sampleINT8API.cpp:448:102: error: cannot convert 'nvinfer1::INetworkDefinition' to 'nvinfer1::INetworkDefinition*' for argument '1' to 'nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)'
     auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
                                                                                                      ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleINT8API.o' failed
make: *** [../../bin/dchobj/sampleINT8API.o] Error 1

And while trying to build sampleOnnxMNIST I get:

../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
sampleOnnxMNIST.cpp: In function 'void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&)':
sampleOnnxMNIST.cpp:45:63: error: cannot convert 'nvinfer1::INetworkDefinition' to 'nvinfer1::INetworkDefinition*' for argument '1' to 'nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)'
     auto parser = nvonnxparser::createParser(*network, gLogger);
                                                               ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleOnnxMNIST.o' failed
make: *** [../../bin/dchobj/sampleOnnxMNIST.o] Error 1

I've downloaded the models and copied them to the required folder but the error persists.
Anything I might be missing?
I'm on Ubuntu 16.04 with CUDA 10 and cuDNN 7.4.1.5-1+cuda10.0.

Attribute not found: height_scale

The onnx model file is exported from a pytorch model by torch.onnx.export. There is an error when I then use onnx2trt to do the conversion.

Input filename: model.onnx
ONNX IR version: 0.0.3
Opset version: 9
Producer name: pytorch
Producer version: 0.4
Domain:
Model version: 0
Doc string:

Parsing model
terminate called after throwing an instance of 'std::out_of_range'
what(): Attribute not found: height_scale
Aborted

I found this is due to torch.nn.Upsample(scale_factor=4, mode='nearest').

In fact, torch's ONNX exporter transforms an Upsample operation like this:
%244 : Dynamic = onnx::Constant[value= 1 1 4 4 [ CPUFloatType{4} ]](), scope: East/Upsample[unpool1]

%245 : Float(1, 512, 100, 100) = onnx::Upsample[mode="nearest"](%243, %244), scope: East/Upsample[unpool1]

return (%245);

I guess height_scale and width_scale hide in the onnx::Constant, and there are no scale attributes on onnx::Upsample. But in your code, height_scale and width_scale are required.
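If that guess is right, the scales can be read out of the constant instead of the (absent) attributes. A pure-Python sketch of the idea (the values mirror the 1 1 4 4 constant above; nothing here uses the real parser):

```python
# The exporter stored the scales as a 4-element constant in (N, C, H, W)
# order instead of height_scale/width_scale attributes on the node.
scales = [1.0, 1.0, 4.0, 4.0]  # value of the onnx::Constant feeding Upsample

attrs = {}  # the Upsample node's attribute map -- empty in this export
height_scale = attrs.get("height_scale", scales[2])  # fall back to constant
width_scale = attrs.get("width_scale", scales[3])

print(height_scale, width_scale)  # 4.0 4.0
```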

I am using PyTorch 0.4.1 and TensorRT 5.0.2.6.

Any suggestions to work around this?

please list supported ONNX version

Please list the supported ONNX version, or list what's not supported. I found
that the ONNX "Shape" op is not supported:

[8] No importer registered for op: Shape

build python wrapper error

While building the Python wrappers and modules, I get the following error. Any ideas as to what could be the cause of this?

$ python setup.py build
running build
running build_py
running build_ext
building 'onnx_tensorrt.parser._nv_onnx_parser_bindings' extension
swigging nv_onnx_parser_bindings.i to nv_onnx_parser_bindings_wrap.cpp
swig -python -c++ -modern -builtin -o nv_onnx_parser_bindings_wrap.cpp nv_onnx_parser_bindings.i
NvOnnxParser.h:46: Error: Syntax error in input(1).
error: command 'swig' failed with exit status 1

Docker test fails

The Docker test fails for the command

python onnx_backend_test.py OnnxBackendRealModelTest

Output :

======================================================================
ERROR: test_densenet121_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 2:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_inception_v2_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 2:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_shufflenet_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 7:
/opt/onnx2trt/builtin_op_importers.cpp:1323 In function importReshape:
[8] Assertion failed: new_shape.nbDims == 3

----------------------------------------------------------------------
Ran 18 tests in 141.325s

FAILED (errors=3, skipped=9)

Error in reshape during conversion

While converting a YOLO ONNX model, I get the following error. Any ideas as to what could be the cause of this?
Parsing model
While parsing node number 463 [Reshape -> "830"]:
ERROR: /home/shashank/onnx-tensorrt/builtin_op_importers.cpp:900 In function importReshape:
[8] Assertion failed: new_shape.nbDims == 3

Is onnx-tensorrt supported on jetson TX2?

Hi, I have seen that onnx-tensorrt requires TensorRT 3.0+, and Jetson TX2 now supports TensorRT 4.0.
So is onnx-tensorrt supported on the TX2, and has anyone successfully run onnx-tensorrt on the TX2?

Thanks a lot to anyone providing helpful suggestions!

Build onnx2trt errors

When I built onnx2trt, I got some errors like:

5 errors detected in the compilation of "/tmp/tmpxft_00004c43_00000000-7_Split.cpp1.ii".
-- Removing /home/amax/Codes/onnx-tensorrt/build/CMakeFiles/nvonnxparser_plugin.dir//./nvonnxparser_plugin_generated_Split.cu.o
/usr/bin/cmake -E remove /home/amax/Codes/onnx-tensorrt/build/CMakeFiles/nvonnxparser_plugin.dir//./nvonnxparser_plugin_generated_Split.cu.o
CMake Error at nvonnxparser_plugin_generated_Split.cu.o.cmake:266 (message):
Error generating file
/home/amax/Codes/onnx-tensorrt/build/CMakeFiles/nvonnxparser_plugin.dir//./nvonnxparser_plugin_generated_Split.cu.o

CMakeFiles/nvonnxparser_plugin.dir/build.make:77: recipe for target CMakeFiles/nvonnxparser_plugin.dir/nvonnxparser_plugin_generated_Split.cu.o failed
make[2]: *** [CMakeFiles/nvonnxparser_plugin.dir/nvonnxparser_plugin_generated_Split.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs..

So what's wrong?

problem on installation

Hi, guys. Here is the error.
[ 19%] Running C++ protocol buffer compiler on /data2/matt/workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx.proto
onnx/onnx_onnx2trt_onnx.proto:401:5: Expected "required", "optional", or "repeated".
onnx/onnx_onnx2trt_onnx.proto:401:17: Missing field number.
onnx/onnx_onnx2trt_onnx.proto:428:3: Expected "required", "optional", or "repeated".
onnx/onnx_onnx2trt_onnx.proto:428:15: Missing field number.
make[2]: *** [third_party/onnx/onnx/onnx_onnx2trt_onnx.pb.cc] Error 1
make[1]: *** [third_party/onnx/CMakeFiles/gen_onnx_proto.dir/all] Error 2
make: *** [all] Error 2

The CMake configuration output is listed below.
-- The CXX compiler identification is GNU 4.8.5
-- The C compiler identification is GNU 4.8.5
-- Check for working CXX compiler: /bin/c++
-- Check for working CXX compiler: /bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /bin/gcc
-- Check for working C compiler: /bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found Protobuf: /usr/local/lib/libprotobuf.so;-pthread (found version "2.5.0")
-- Build type not set - defaulting to Release

-- CMake version : 3.13.0-rc3
-- CMake command : /data2/matt/cmake-3.13.0-rc3/bin/cmake
-- System : Linux
-- C++ compiler : /bin/c++
-- C++ compiler version : 4.8.5
-- CXX flags : -Wall -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_NAMESPACE=onnx2trt_onnx
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
-- ONNX version : 1.3.0
-- ONNX NAMESPACE : onnx2trt_onnx
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- Protobuf compiler : /bin/protoc
-- Protobuf includes : /usr/local/include
-- Protobuf libraries : /usr/local/lib/libprotobuf.so;-pthread
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA: /usr/local/cuda (found version "9.0")
-- Found CUDNN: /usr/local/cuda/include
-- Found TensorRT headers at /data2/matt/TensorRT-5.0.2.6/include
-- Find TensorRT libs at /data2/maqiu/TensorRT-5.0.2.6/lib/libnvinfer.so;/data2/matt/TensorRT-5.0.2.6/lib/libnvinfer_plugin.so
-- Found TENSORRT: /data2/matt/TensorRT-5.0.2.6/include
-- Configuring done
-- Generating done

How to use FP16 or INT8?

Hi,

I was trying to use FP16 and INT8.

I understand this is how you prepare an FP32 model.

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)

I tried this, but it didn't work.

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1', dtype=np.float16)
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float16)

Any help will be greatly appreciated. Thanks!

@yinghai

Failed to compile

I was following the README to install onnx-tensorrt, when I ran make -j8, I got the following error:

/usr/include/c++/5/bits/stl_iterator_base_types.h(154): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]"
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]"
/home/xya/onnx-tensorrt/Split.cu(39): here

What can I do to fix it?

Couldn't find index page for 'tensorrt' (maybe misspelled?)

python setup.py build

  • The above step completes successfully.

  • But the following error is generated when running the install step:

python setup.py install

Searching for tensorrt>=3.0.0
Reading https://pypi.org/simple/tensorrt/
Couldn't find index page for 'tensorrt' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
No local packages or working download links found for tensorrt>=3.0.0
error: Could not find suitable distribution for Requirement.parse('tensorrt>=3.0.0')

We have already installed TensorRT 4.0.1 based on NVIDIA's official documentation, but we still face the above error.

Protobuf linker errors

I have a bunch of errors like this:

CMakeFiles/onnx2trt.dir/main.cpp.o: In function `pretty_print_onnx_to_string[abi:cxx11](google::protobuf::Message const&)':
main.cpp:(.text._Z27pretty_print_onnx_to_stringB5cxx11RKN6google8protobuf7MessageE[_Z27pretty_print_onnx_to_stringB5cxx11RKN6google8protobuf7MessageE]+0x39): undefined reference to `google::protobuf::TextFormat::PrintToString(google::protobuf::Message const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)'

Is there a protobuf version requirement (min, max)? I have both 2.6 and 3.6 available, but both seem to have linker issues.

CMake error with version 3.2.2

I got the error message 'Policy CMP0063 not known to this version of CMake'.
Upgrading to the latest CMake fixes this problem.

Build onnx2trt error

When I attempt to make onnx2trt, I get the following error:
[100%] Built target onnx_proto
[100%] Built target nvonnxparser_plugin
[100%] Built target nvonnxparser_static
[100%] Built target nvonnxparser
[100%] Built target nvonnxparser_runtime
[100%] Built target nvonnxparser_runtime_static
[100%] Linking CXX executable onnx2trt
libnvonnxparser_static.a(builtin_op_importers.cpp.o): In function `onnx2trt::(anonymous namespace)::importConcat(onnx2trt::IImporterContext*, onnx2trt_onnx::NodeProto const&, std::vector<onnx2trt::TensorOrWeights, std::allocator<onnx2trt::TensorOrWeights> >&)':
builtin_op_importers.cpp:(.text+0x1a259): undefined reference to `nvinfer1::plugin::createConcatPlugin(int, bool)'
collect2: error: ld returned 1 exit status
CMakeFiles/onnx2trt.dir/build.make:105: recipe for target 'onnx2trt' failed
make[2]: *** [onnx2trt] Error 1
CMakeFiles/Makefile2:256: recipe for target 'CMakeFiles/onnx2trt.dir/all' failed
make[1]: *** [CMakeFiles/onnx2trt.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2

I've installed TensorRT 4.0. However, it seems the definition of `nvinfer1::plugin::createConcatPlugin(int, bool)', which is declared in NvInferPlugin.h, cannot be found. Can anyone help?

feature extraction

Hi,

After converting an ONNX model into a TRT engine, is there any way to extract the weights from its layers? I searched a lot but could not find anything, including in the TensorRT documentation.

Thanks.
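As far as I know, a serialized TensorRT engine is opaque and does not expose its weights back to the user. A hedged workaround sketch, assuming the `onnx` Python package: read the weights from the source ONNX model instead, since every initializer is stored there as a named tensor. `onnx_weights` is a hypothetical helper name.

```python
def onnx_weights(model_path):
    # A serialized TensorRT engine is opaque; the practical route is to
    # read the weights from the source ONNX model instead.
    # Assumes the `onnx` Python package is installed.
    import onnx
    from onnx import numpy_helper
    model = onnx.load(model_path)
    return {init.name: numpy_helper.to_array(init)
            for init in model.graph.initializer}
```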

PyTorch-exported ONNX model cannot be converted to a TensorRT model

Environment:
PyTorch & Caffe2 (v1.0) from conda
ONNX (1.3.0) from pip
onnx-tensorrt (0.1.0) from source

I can convert the PyTorch model to .onnx and the .onnx model to a Caffe2 .pb, and both run successfully.

But when I convert the same .onnx model to a TensorRT .trt model, it fails with:

----------------------------------------------------------------
Input filename:   ocr_api.onnx
ONNX IR version:  0.0.3
Opset version:    9
Producer name:    pytorch
Producer version: 0.4
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
Parsing model
While parsing node number 70 [MatMul -> "194"]:
ERROR: /home/wfy/Downloads/onnx-tensorrt/builtin_op_importers.cpp:1066 In function importMatMul:
[8] Assertion failed: dims.nbDims == 3

If I use the ONNX model from your site, it works.

Thanks!

Cannot load Resnet50 converted from onnx model zoo

Repro:

wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
tar xzf resnet50.tar.gz
onnx2trt resnet50/model.onnx -o resnet50.trt
python -c "import tensorrt as trt; eng = trt.lite.Engine(PLAN='resnet50.trt')"

It shows:
TypeError: Dimension mismatch
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python3.5/dist-packages/tensorrt/lite/engine.py", line 248, in init
self.output_dim = [self.engine.get_binding_dimensions(o).to_DimsCHW() for o in output_index]
File "/usr/lib/python3.5/dist-packages/tensorrt/lite/engine.py", line 248, in
self.output_dim = [self.engine.get_binding_dimensions(o).to_DimsCHW() for o in output_index]
SystemError: <built-in method to_DimsCHW of nv_infer_bindings.Dims object at 0x7f8da85c0458> returned a result with an error set

[question] int8 mode support

I looked at the code; it seems INT8 is not supported in onnx-tensorrt yet.
I am wondering which approach will work for the time being:

  1. Use the TensorRT 4.0.1 ONNX parser to load the network into a TRT network definition, then
     force INT8 mode in the builder. Will the onnx-tensorrt runtime still be used here? If so, is
     anything required at runtime to support INT8?

  2. Convert the ONNX model to a Caffe model, then Caffe -> TensorRT INT8

} else if( model_dtype == nvinfer1::DataType::kINT8 ) {
      // TODO: Int8 support
      //trt_builder->setInt8Mode(true);
      cerr << "ERROR: Int8 mode not yet supported" << endl;
      return -5;
    }
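Approach 1 can be sketched as follows, with the caveat that this is an assumption-laden outline, not the project's documented path: once the ONNX parser has populated the TRT network, the onnx-tensorrt runtime is out of the picture and INT8 is purely a builder-config concern plus a calibrator at build time. Flag and attribute names (`BuilderFlag.INT8`, `int8_calibrator`) are from recent TensorRT Python releases; `calibration_batches` and `enable_int8` are hypothetical helper names.

```python
import numpy as np

def calibration_batches(n_batches=4, shape=(32, 3, 224, 224), seed=0):
    # Synthetic FP32 batches for illustration only; real calibration
    # needs data representative of the deployment distribution.
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        yield rng.random(shape, dtype=np.float32)

def enable_int8(config, trt, calibrator):
    # Approach 1: parse with the ONNX parser, then force INT8 in the
    # builder config. The built engine runs INT8 kernels directly, so
    # nothing extra is needed at inference time beyond normal execution.
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator
```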

Python wrapper import error

>>> import onnx_tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "onnx_tensorrt/__init__.py", line 23, in <module>
    from . import backend
  File "onnx_tensorrt/backend.py", line 22, in <module>
    from . import parser
  File "onnx_tensorrt/parser/__init__.py", line 23, in <module>
    from ._nv_onnx_parser_bindings import *
ImportError: No module named _nv_onnx_parser_bindings
TensorRT 4.0.1.6, Python 2.7, CUDA 9.0, cuDNN 7.1.3, Ubuntu 16.04

make -j8 fails with an error

I was able to complete the first three steps:

mkdir build
cd build
cmake .. -DTENSORRT_ROOT=/opt/tensorrt

In the fourth step,

make -j8

I get the following error:


Error generating file
/home/anish-fujitsu/onnx-tensorrt/build/CMakeFiles/nvonnxparser_plugin.dir//./nvonnxparser_plugin_generated_FancyActivation.cu.o

CMakeFiles/nvonnxparser_plugin.dir/build.make:886: recipe for target 'CMakeFiles/nvonnxparser_plugin.dir/nvonnxparser_plugin_generated_FancyActivation.cu.o' failed
make[2]: *** [CMakeFiles/nvonnxparser_plugin.dir/nvonnxparser_plugin_generated_FancyActivation.cu.o] Error 1
CMakeFiles/Makefile2:180: recipe for target 'CMakeFiles/nvonnxparser_plugin.dir/all' failed
make[1]: *** [CMakeFiles/nvonnxparser_plugin.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2

thoughts on onnx data type INT64 support?

I am trying to deploy onnx model using tensorrt, but cannot find tensorrt support for int64 data type.

nvinfer1::ITensor supports the following types:

kFLOAT FP32 format.
kHALF FP16 format.
kINT8 INT8 format.
kINT32 INT32 format.

In the ONNX model exported from PyTorch, I have several usages of INT64:

  1. The Shape operator generates an INT64 tensor per its definition
  2. Some constants are INT64

Any thoughts on how to address this? Should we modify the PyTorch export to avoid INT64, assuming
we never use a number > 2G? Or should we add INT64 support to onnx-tensorrt?
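The "assume values fit in 32 bits" option can be sketched with a small downcast helper; this is an illustrative fragment of a graph-rewrite workaround (applying it to every INT64 initializer and Constant before parsing), not code from this repo. `downcast_int64` is a hypothetical name.

```python
import numpy as np

INT32_MIN = np.iinfo(np.int32).min
INT32_MAX = np.iinfo(np.int32).max

def downcast_int64(arr):
    # Core of a graph-rewrite workaround: cast INT64 tensors to INT32
    # when every value fits, mirroring what a parser-side narrowing
    # pass could do automatically.
    a = np.asarray(arr)
    if a.dtype != np.int64:
        return a
    if a.size and (a.min() < INT32_MIN or a.max() > INT32_MAX):
        return None  # genuinely needs 64 bits; cannot downcast safely
    return a.astype(np.int32)
```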

TensorRT doesn't accelerate

Compared with the original model, the time cost using the TensorRT engine is more than two times higher. So why doesn't it accelerate inference? The figure below shows the running time per batch for the MXNet model and the TensorRT engine.
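One common cause of misleading comparisons is the timing harness itself. A hedged sketch (generic benchmarking code, not from this repo; `median_latency` is a hypothetical name): warm up first, and synchronize the GPU inside the timed region, otherwise only the asynchronous kernel launches are measured.

```python
import time

def median_latency(run, n_warmup=10, n_iter=100, synchronize=lambda: None):
    # Median wall-clock seconds per call. For a GPU engine, pass a real
    # synchronize callable (e.g. a CUDA stream sync); without it you time
    # only the asynchronous launches, making any comparison meaningless.
    for _ in range(n_warmup):
        run()
    synchronize()
    times = []
    for _ in range(n_iter):
        t0 = time.perf_counter()
        run()
        synchronize()
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]
```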


Sometimes, errors like these occur:

Cuda error in file src/implicit_gemm.cu at line 1214: invalid resource handle
[TensorRT] ERROR: customWinogradConvActLayer.cpp (308) - Cuda Error in execute: 33
[TensorRT] ERROR: customWinogradConvActLayer.cpp (308) - Cuda Error in execute: 33

It's very weird, and I don't know what happened.

Error in function importGemm during conversion

While converting a ResNet-50 ONNX model exported from MXNet, I get the following error:

----------------------------------------------------------------
Input filename:   resnet-50.onnx
ONNX IR version:  0.0.3
Opset version:    7
Producer name:    
Producer version: 
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
While parsing node number 173 [Gemm -> "fc1"]:
ERROR: /home/faldict/onnx-tensorrt/builtin_op_importers.cpp:812 In function importGemm:
[8] Assertion failed: broadcast

Did anyone encounter similar issues? Any ideas on how to solve it?

unit test cannot pass

Check out version f1b74d5, build it in Docker, then inside the container run:

root@tempfix:/workspace# python onnx_backend_test.py


Ran 540 tests in 370.619s
FAILED (failures=2, errors=46, skipped=355)

unit test cannot pass inside docker

root@trt:/workspace# python onnx_backend_test.py
s.sEs.sssssssssssssss.s.s.s.s.s.sterminate called after throwing an instance of 'std::invalid_argument'
what(): Unsupported form of asymmetric padding for AveragePool op
Aborted (core dumped)
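The aborting test is the SAME_LOWER AveragePool case, where the total padding is split unevenly with the extra pixel on the leading side. A hedged workaround sketch (illustrative, not parser code; `same_lower_pads` is a hypothetical name): compute the asymmetric amounts and emit them as an explicit Pad node before the pool, which sidesteps the parser's asymmetric-padding check (note that for AveragePool this changes averaging semantics unless count_include_pad is handled).

```python
def same_lower_pads(in_size, kernel, stride):
    # ONNX SAME_LOWER: total padding so that out = ceil(in / stride),
    # with the larger half of the padding on the leading side.
    out = -(-in_size // stride)                 # ceil division
    total = max((out - 1) * stride + kernel - in_size, 0)
    begin = total - total // 2                  # larger half first
    return begin, total - begin
```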

NvOnnxParser.h:26:21: fatal error: NvInfer.h: No such file or directory compilation terminated.

wang@ian:~/Downloads/onnx-tensorrt$ python setup.py build
running build
running build_py
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/onnx_tensorrt
copying onnx_tensorrt/init.py -> build/lib.linux-x86_64-2.7/onnx_tensorrt
copying onnx_tensorrt/tensorrt_engine.py -> build/lib.linux-x86_64-2.7/onnx_tensorrt
copying onnx_tensorrt/backend.py -> build/lib.linux-x86_64-2.7/onnx_tensorrt
creating build/lib.linux-x86_64-2.7/onnx_tensorrt/parser
copying onnx_tensorrt/parser/init.py -> build/lib.linux-x86_64-2.7/onnx_tensorrt/parser
creating build/lib.linux-x86_64-2.7/onnx_tensorrt/runtime
copying onnx_tensorrt/runtime/init.py -> build/lib.linux-x86_64-2.7/onnx_tensorrt/runtime
running build_ext
building 'onnx_tensorrt.parser._nv_onnx_parser_bindings' extension
swigging nv_onnx_parser_bindings.i to nv_onnx_parser_bindings_wrap.cpp
swig -python -c++ -modern -builtin -o nv_onnx_parser_bindings_wrap.cpp nv_onnx_parser_bindings.i
creating build/temp.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c nv_onnx_parser_bindings_wrap.cpp -o build/temp.linux-x86_64-2.7/nv_onnx_parser_bindings_wrap.o -std=c++11 -DUNIX -D__UNIX -m64 -fPIC -O2 -w -fmessage-length=0 -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -DNDEBUG -g -fwrapv -Wall -DSWIG
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
In file included from nv_onnx_parser_bindings_wrap.cpp:3580:0:
NvOnnxParser.h:26:21: fatal error: NvInfer.h: No such file or directory
compilation terminated.

docker test fails

The Docker test fails for the command:

python onnx_backend_test.py OnnxBackendRealModelTest

Output:

root@b85b37e9dcfb:/workspace# python onnx_backend_test.py
s.sEs.sssssssssssssss.s.s.s.s.s.sEs.s.sssEsEsssssssssssss.s.s.s.s.s.sEsEs.sEs.s.sssssssEsEsEsEsEs.s.sss.s.s.sssss.s.sss.sssss.s.s.sUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
Es.s.s.s.sssssssssssssssssssss.s.s.s.s.s.s.sssss.s.sss.s.s.s.s.sssssssEsEsEs.s.s.s.s.s.s.s.s.sFsFs.sssEsEsEs.s.s.sEs.s.s.s.sssssssssssssssssssssssssssss.sEs.s.s.s.sEsEsEsEsss.sUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
Es.s.s.sEsEs.s.sssssEsEsssssssssssss.s.s.s.s.s.s.s.s.sssssssEs.sEs.s.sEsEs.s.s.s.s.s.s.s.s.s.sssssEs.s.sEsEsEsEsEsEs.sssssssssssssssssssssssEsEs.s.sssssss.s.s.sssssEsEsEsEsEsEsEsEsEs.s.s.s.s.s.s.s.s.s.s.sssssssssssssss.s.s.sUnsupported ONNX data type: INT64 (7)
EsUnsupported ONNX data type: INT64 (7)
Es.s.s.s.s.s.s.s.s.s.sssssssssssssssssssEsEs.sEsss.s.s.s.s.s.s.s.s.s.s.s.s.s.s.sUnsupported ONNX data type: DOUBLE (11)
EsUnsupported ONNX data type: DOUBLE (11)
EsUnsupported ONNX data type: DOUBLE (11)
EsUnsupported ONNX data type: DOUBLE (11)
EsUnsupported ONNX data type: DOUBLE (11)
EsEs.sss.s.s.s.s.s.sEsss.s.s.sEsUnsupported ONNX data type: INT64 (7)
EsEsEsEs.sEsEsEsEsssssssss.s.s.s.s.s.s.s.sEs.sEs.sEs.s[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574674712
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574674712
.s.s.
======================================================================
ERROR: test_add_bcast_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 266, in run
    outputs = list(prepared_model.run(inputs))
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 109, in run
    outputs = self.engine.run(inputs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/tensorrt_engine.py", line 106, in run
    raise ValueError("All inputs must have same batch size")
ValueError: All inputs must have same batch size

======================================================================
ERROR: test_averagepool_2d_same_lower_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:385 In function importAveragePool:
[8] Assertion failed: supported_form_of_asymmetric_padding_for_AveragePool

======================================================================
ERROR: test_basic_conv_with_padding_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_basic_conv_without_padding_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_concat_1d_axis_0_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:487 In function importConcat:
[8] Assertion failed: axis != BATCH_DIM

======================================================================
ERROR: test_concat_2d_axis_0_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:487 In function importConcat:
[8] Assertion failed: axis != BATCH_DIM

======================================================================
ERROR: test_concat_3d_axis_0_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:487 In function importConcat:
[8] Assertion failed: axis != BATCH_DIM

======================================================================
ERROR: test_conv_with_strides_no_padding_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_conv_with_strides_padding_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_depthtospace_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: DepthToSpace

======================================================================
ERROR: test_depthtospace_example_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: DepthToSpace

======================================================================
ERROR: test_div_bcast_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 266, in run
    outputs = list(prepared_model.run(inputs))
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 109, in run
    outputs = self.engine.run(inputs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/tensorrt_engine.py", line 106, in run
    raise ValueError("All inputs must have same batch size")
ValueError: All inputs must have same batch size

======================================================================
ERROR: test_gather_0_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_gather_1_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_matmul_2d_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:936 In function importMatMul:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_matmul_3d_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:936 In function importMatMul:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_matmul_4d_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:936 In function importMatMul:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_mean_example_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Mean

======================================================================
ERROR: test_mean_one_input_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Mean

======================================================================
ERROR: test_mean_two_inputs_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Mean

======================================================================
ERROR: test_mul_bcast_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 266, in run
    outputs = list(prepared_model.run(inputs))
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 109, in run
    outputs = self.engine.run(inputs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/tensorrt_engine.py", line 106, in run
    raise ValueError("All inputs must have same batch size")
ValueError: All inputs must have same batch size

======================================================================
ERROR: test_pow_bcast_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:56 In function importInput:
[8] Assertion failed: onnx_tensor_type.shape().dim().size() > 0

======================================================================
ERROR: test_reduce_log_sum_asc_axes_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceLogSum

======================================================================
ERROR: test_reduce_log_sum_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceLogSum

======================================================================
ERROR: test_reduce_log_sum_default_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceLogSum

======================================================================
ERROR: test_reduce_log_sum_desc_axes_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceLogSum

======================================================================
ERROR: test_reshape_extended_dims_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_reshape_negative_dim_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_reshape_one_dim_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_reshape_reduced_dims_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_reshape_reordered_dims_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_shape_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:310 In function importModel:
[7] Assertion failed: tensors.at(output.name()).is_tensor()

======================================================================
ERROR: test_shape_example_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:310 In function importModel:
[7] Assertion failed: tensors.at(output.name()).is_tensor()

======================================================================
ERROR: test_size_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:310 In function importModel:
[7] Assertion failed: tensors.at(output.name()).is_tensor()

======================================================================
ERROR: test_size_example_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:310 In function importModel:
[7] Assertion failed: tensors.at(output.name()).is_tensor()

======================================================================
ERROR: test_split_variable_parts_1d_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1522 In function importSplit:
[8] Assertion failed: axis != BATCH_DIM

======================================================================
ERROR: test_split_variable_parts_default_axis_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1522 In function importSplit:
[8] Assertion failed: axis != BATCH_DIM

======================================================================
ERROR: test_squeeze_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Squeeze

======================================================================
ERROR: test_sub_bcast_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 266, in run
    outputs = list(prepared_model.run(inputs))
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 109, in run
    outputs = self.engine.run(inputs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/tensorrt_engine.py", line 106, in run
    raise ValueError("All inputs must have same batch size")
ValueError: All inputs must have same batch size

======================================================================
ERROR: test_top_k_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: TopK

======================================================================
ERROR: test_transpose_all_permutations_2_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1660 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

======================================================================
ERROR: test_transpose_all_permutations_3_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1660 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

======================================================================
ERROR: test_transpose_all_permutations_4_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1660 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

======================================================================
ERROR: test_transpose_all_permutations_5_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1660 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

======================================================================
ERROR: test_transpose_default_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1660 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

======================================================================
ERROR: test_unsqueeze_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_AvgPool1d_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_AvgPool1d_stride_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_ConstantPad2d_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1116 In function importPad:
[8] Assertion failed: mode == "constant" && value == 0

======================================================================
ERROR: test_Conv1d_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_dilated_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_groups_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_pad1_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_pad1size1_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_pad2_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_pad2size1_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Conv1d_stride_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:536 In function importConv:
[8] Assertion failed: kernel_weights.shape.nbDims == 4

======================================================================
ERROR: test_Embedding_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_Embedding_sparse_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_PixelShuffle_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 1:
/opt/onnx2trt/builtin_op_importers.cpp:1323 In function importReshape:
[8] Assertion failed: new_shape.nbDims == 3

======================================================================
ERROR: test_PoissonNLLLLoss_no_reduce_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 2:
/opt/onnx2trt/builtin_op_importers.cpp:1086 In function importMul:
[8] Assertion failed: get_shape_size(scale_weights.shape) == get_shape_size(dims)

======================================================================
ERROR: test_ReflectionPad2d_cuda (__main__.OnnxBackendPyTorchConvertedModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1116 In function importPad:
[8] Assertion failed: mode == "constant" && value == 0

======================================================================
ERROR: test_operator_add_broadcast_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_add_size1_broadcast_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_add_size1_right_broadcast_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_add_size1_singleton_broadcast_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_addconstant_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_addmm_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:777 In function importGemm:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_operator_index_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Slice

======================================================================
ERROR: test_operator_mm_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 1:
/opt/onnx2trt/builtin_op_importers.cpp:777 In function importGemm:
[8] Assertion failed: inputs.at(1).is_weights()

======================================================================
ERROR: test_operator_non_float_params_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

======================================================================
ERROR: test_operator_pad_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:1116 In function importPad:
[8] Assertion failed: mode == "constant" && value == 0

======================================================================
ERROR: test_operator_params_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/builtin_op_importers.cpp:328 In function importAdd:
[8] Assertion failed: get_shape_size(shift_weights.shape) == get_shape_size(dims)

======================================================================
ERROR: test_operator_permute2_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number -1:
/opt/onnx2trt/ModelImporter.cpp:78 In function importInput:
[8] Assertion failed: trt_dims.nbDims <= 3

======================================================================
ERROR: test_operator_reduced_mean_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceMean

======================================================================
ERROR: test_operator_reduced_mean_keepdim_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceMean

======================================================================
ERROR: test_operator_reduced_sum_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceSum

======================================================================
ERROR: test_operator_reduced_sum_keepdim_cuda (__main__.OnnxBackendPyTorchOperatorModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 0:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: ReduceSum

======================================================================
ERROR: test_densenet121_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
RuntimeError: While parsing node number 2:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_inception_v2_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 2:
/opt/onnx2trt/ModelImporter.cpp:141 In function importNode:
[8] No importer registered for op: Unsqueeze

======================================================================
ERROR: test_shufflenet_cuda (__main__.OnnxBackendRealModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 237, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 74, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 7:
/opt/onnx2trt/builtin_op_importers.cpp:1323 In function importReshape:
[8] Assertion failed: new_shape.nbDims == 3

======================================================================
FAIL: test_maxpool_2d_same_lower_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 267, in run
    self._assert_similar_outputs(ref_outputs, outputs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 154, in _assert_similar_outputs
    atol=1e-7)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Not equal to tolerance rtol=0.001, atol=1e-07

(mismatch 1.13932291667%)
 x: array([ 1.764052,  1.764052,  0.978738, ...,  1.178189, -0.941546,
        1.661652], dtype=float32)
 y: array([ 1.764052,  1.764052,  0.978738, ...,  1.178189, -0.941546,
        1.661652], dtype=float32)

======================================================================
FAIL: test_maxpool_2d_same_upper_cuda (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 211, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 267, in run
    self._assert_similar_outputs(ref_outputs, outputs)
  File "/usr/local/lib/python2.7/dist-packages/onnx-1.1.1-py2.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 154, in _assert_similar_outputs
    atol=1e-7)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Not equal to tolerance rtol=0.001, atol=1e-07

(mismatch 1.20442708333%)
 x: array([ 1.764052,  0.978738,  2.240893, ..., -0.941546,  0.254716,
        0.254716], dtype=float32)
 y: array([1.764052, 0.978738, 2.240893, ..., 0.      , 0.254716, 0.254716],
      dtype=float32)

----------------------------------------------------------------------
Ran 722 tests in 47.528s

FAILED (failures=2, errors=81, skipped=468)

docker build fails

I get this error while building the library:

CMake Error at CMakeLists.txt:97 (add_subdirectory): The source directory /opt/onnx2trt/third_party/onnx does not contain a CMakeLists.txt file.
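This error usually means the repository's git submodules were never fetched: `third_party/onnx` is a submodule, so a plain clone leaves it as an empty directory and `add_subdirectory` finds no CMakeLists.txt. A likely fix (assuming the standard upstream clone URL) is to clone recursively, or to initialize the submodules in an existing checkout, before re-running cmake:

```shell
# Fresh checkout: pull the repository together with its submodules
git clone --recursive https://github.com/onnx/onnx-tensorrt.git

# Or, inside an existing clone that was made without --recursive:
git submodule update --init --recursive

# Sanity check before re-running cmake: the submodule should now be populated
test -f third_party/onnx/CMakeLists.txt && echo "onnx submodule present"
```

The same applies when building inside docker: make sure the source tree copied into the container already has its submodules checked out.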
