paddlepaddle / paddle2onnx
ONNX Model Exporter for PaddlePaddle
License: Apache License 2.0
I trained a model with stnet and wanted to convert it to ONNX, but got an error:
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'conv3d_0.tmp_0' of node:
input: "conv3d_0.tmp_0" input: "conv3d_0.tmp_1@reshape_y" output: "conv3d_0.tmp_1" op_type: "Add"
is not output of any previous nodes.
Environment: Ubuntu 16.04
Paddle version: paddlepaddle-gpu 1.6.1.post107
ONNX version: onnx 1.5.0
microsoft/onnxruntime#1541
pybind/pybind11#1262
onnx/onnx#2339
Updating pybind to the latest version solves this.
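For context, the constraint the checker enforces can be sketched independently of paddle2onnx: every node input must be produced before it is consumed, and a Kahn-style topological sort restores that order whenever the graph is acyclic. This is an illustrative sketch, not the converter's actual code; nodes are modeled as (inputs, outputs) tuples, and names with no producer are treated as graph inputs or initializers.

```python
from collections import deque

def toposort(nodes):
    """Order nodes so every input is produced before it is consumed.

    `nodes` is a list of (inputs, outputs) tuples; any input name that no
    node produces is assumed to be a graph input or initializer.
    """
    produced_by = {o: i for i, (_, outs) in enumerate(nodes) for o in outs}
    indegree = [0] * len(nodes)
    consumers = [[] for _ in nodes]
    for i, (ins, _) in enumerate(nodes):
        for name in ins:
            if name in produced_by:           # internal edge producer -> i
                indegree[i] += 1
                consumers[produced_by[name]].append(i)
    ready = deque(i for i, d in enumerate(indegree) if d == 0)
    order = []
    while ready:
        i = ready.popleft()
        order.append(i)
        for j in consumers[i]:
            indegree[j] -= 1
            if indegree[j] == 0:
                ready.append(j)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle; no topological order exists")
    return [nodes[i] for i in order]
```

Running this over the exported node list before serialization would surface (or repair) orderings like the Add-before-Conv3d case above.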
load the model parameter done.
The operator sets to run test case.
{'nearest_interp', 'yolo_box', 'shape', 'batch_norm', 'multiclass_nms', 'conv2d', 'transpose2', 'cast', 'concat', 'fill_constant', 'leaky_relu', 'slice', 'elementwise_mul', 'scale', 'elementwise_add'}
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/bin/paddle2onnx", line 10, in
sys.exit(main())
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/fluid_onnx/fluid_to_onnx.py", line 230, in main
convert(args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/fluid_onnx/fluid_to_onnx.py", line 194, in convert
checker.check_model(onnx_model)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/onnx/checker.py", line 91, in check_model
C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Node () has input size 2 not in range [min=3, max=4].
==> Context: Bad node spec: input: "leaky_relu_58.tmp_0" input: "nearest_interp_0.tmp_0@scales" output: "nearest_interp_0.tmp_0" op_type: "Resize" attribute { name: "mode" s: "nearest" type: STRING }
The goal is to get inference results identical to those of the Paddle models under Paddle inference. @kuke has more thoughts. And here is a notebook on the topic: https://github.com/onnx/tutorials/blob/master/tutorials/CorrectnessVerificationAndPerformanceComparison.ipynb.
Q: what runtime would we use for ONNX: TensorRT?
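Whichever runtime is chosen, the comparison step of the verification can stay runtime-agnostic. A minimal pure-Python sketch mirroring numpy.allclose semantics (the tolerances are illustrative defaults, not values from this project):

```python
def outputs_match(expected, actual, rtol=1e-5, atol=1e-6):
    """Compare two flat lists of floats elementwise, using the
    numpy.allclose criterion: |a - e| <= atol + rtol * |e|."""
    if len(expected) != len(actual):
        return False
    return all(abs(a - e) <= atol + rtol * abs(e)
               for e, a in zip(expected, actual))
```

Feeding the flattened fluid outputs and ONNX-runtime outputs through such a check is the core of the correctness comparison the notebook walks through.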
This is the pretrained StarGAN model from PaddlePaddle: https://paddle-gan-models.bj.bcebos.com/stargan_G.tar.gz
How do I convert it to an ONNX model?
paddle2onnx --fluid_model_name .DS_Store --fluid_model stargan --onnx_model .
The command line above keeps reporting this error: FileNotFoundError: [Errno 2] No such file or directory: 'stargan\model'
What is the name of this pretrained model?
We ran into the following issues and would like to get direction from the paddle-onnx devs.
1. assertion error : dims.nbDims == 3
I can convert paddle paddle models to onnx using the fluid_to_onnx.py script, e.g.
(venv) root@c33e4b787188:/paddle-onnx# python fluid_to_onnx.py --fluid_model extras/fit_a_line.inference.model --onnx_model extras/fit_a_line.onnx.inference.model --to_print_model > extras/fit_a_line_onnx.inference.model.out
(venv) root@c33e4b787188:/paddle-onnx# cd extras
(venv) root@c33e4b787188:~/paddle-onnx/extras# ls
fit_a_line.inference.model fit_a_line.onnx.inference.model fit_a_line_onnx.inference.model.out
but when I try to validate the resulting onnx model using the validate.py script I get the following error when using the tensorrt backend:
Inference results for fluid model:
[array([[15.874767],
[17.174097],
[14.951813],
[14.069194],
[13.003316],
[17.782452],
[16.204231],
[13.260891],
[15.537827],
[13.720056]], dtype=float32)]
Traceback (most recent call last):
File "validate.py", line 129, in
validate(args)
File "validate.py", line 113, in validate
rep = backend.prepare(onnx_model, device='CUDA:0')
File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 166, in prepare
File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 71, in init
RuntimeError: While parsing node number 1:
/root/onnx-tensorrt/builtin_op_importers.cpp:539 In function importFlatten:
[8] Assertion failed: dims.nbDims == 3
2. dim_value 0
We see some dim_value of 0 in the generated model. Is this expected, and how should those values be interpreted?
When I look at the human readable onnx model, i.e. fit_a_line_onnx.inference.model.out that was generated by the fluid_to_onnx.py script there are nodes with dim_value of 0:
(venv) root@c33e4b787188:~/paddle-onnx/extras# cat fit_a_line_onnx.inference.model.out | more
The converted model is:
ir_version: 3
producer_name: "PaddlePaddle"
graph {
node {
input: "x"
output: "x@flatten_0"
op_type: "Flatten"
attribute {
name: "axis"
i: 1
type: INT
}
}
node {
input: "fc_0.w_0"
output: "fc_0.w_0@flatten_0"
op_type: "Flatten"
attribute {
name: "axis"
i: 1
type: INT
}
}
node {
input: "x@flatten_0"
input: "fc_0.w_0@flatten_0"
output: "fc_0.tmp_0@matmul_0"
op_type: "MatMul"
}
node {
output: "fc_0.tmp_0@shape_0"
op_type: "Constant"
attribute {
name: "value"
t {
dims: 2
data_type: INT64
int64_data: 0
int64_data: 1
name: fc_0.tmp_0@shape_0
}
type: TENSOR
}
}
node {
input: "fc_0.tmp_0@matmul_0"
input: "fc_0.tmp_0@shape_0"
output: "fc_0.tmp_0"
op_type: "Reshape"
}
node {
input: "fc_0.tmp_0"
input: "fc_0.b_0"
output: "fc_0.tmp_1"
op_type: "Add"
attribute {
name: "axis"
i: 1
type: INT
}
attribute {
name: "broadcast"
i: 1
type: INT
}
}
name: "fit_a_line"
initializer {
dims: 1
data_type: FLOAT
float_data: 19.3055801392
name: "fc_0.b_0"
}
initializer {
dims: 13
dims: 1
data_type: FLOAT
float_data: -0.235716566443
float_data: 1.50793659687
float_data: -1.37839913368
float_data: 0.587660908699
float_data: -1.62691628933
float_data: 1.94002008438
float_data: -1.5584435463
float_data: 1.01809895039
float_data: -2.47688126564
float_data: -2.48663592339
float_data: -2.72155380249
float_data: 1.01887917519
float_data: -2.73560881615
name: "fc_0.w_0"
}
input {
name: "x"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 0
}
dim {
dim_value: 13
}
}
}
}
}
input {
name: "fc_0.b_0"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 1
}
}
}
}
}
input {
name: "fc_0.w_0"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 13
}
dim {
dim_value: 1
}
}
}
}
}
output {
name: "fc_0.tmp_1"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 0
}
dim {
dim_value: 1
}
}
}
}
}
}
opset_import {
version: 7
}
Saved converted model to path: extras/fit_a_line.onnx.inference.model
I get the same results after converting the recognize_digits_mlp.inference.model from the PaddlePaddle repo's /path-to-repo/paddle-paddle/python/paddle/fluid/tests/book directory…
Please add the following imports at the head of each Python file:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
Travis CI always fails
I want to convert a pretrained NLP model from PaddlePaddle to TensorFlow,
like BERT or ELMo (https://github.com/PaddlePaddle/LARK/tree/develop/BERT).
Has that been supported and tested?
Currently, we simply use a Constant operator to handle persistable variables. I think we should instead use the initializer field of the ONNX graph proto. Since fluid also has genuine constant operators, please consider this case:
cell_init = fluid.layers.fill_constant_batch_size_like(
input=decoder_boot,
value=0.0,
shape=[-1, decoder_size],
dtype='float32')
cell_init.stop_gradient = False
with rnn.block():
current_word = rnn.step_input(target_embedding)
encoder_vec = rnn.static_input(encoder_vec)
encoder_proj = rnn.static_input(encoder_proj)
cell_init should remain a constant op, which is not trainable.
I execute sh setup.sh and all Python dependencies are installed successfully. However, I can't run convert.py; the error message is:
Traceback (most recent call last):
File "convert.py", line 18, in <module>
from onnx import helper, checker
File "/usr/local/lib/python2.7/dist-packages/onnx/__init__.py", line 10, in <module>
import onnx.helper # noqa
File "/usr/local/lib/python2.7/dist-packages/onnx/helper.py", line 15, in <module>
import onnx.defs as defs
File "/usr/local/lib/python2.7/dist-packages/onnx/defs/__init__.py", line 6, in <module>
import onnx.onnx_cpp2py_export.defs as C
ImportError: /usr/local/lib/python2.7/dist-packages/onnx/onnx_cpp2py_export.so: undefined symbol: _ZNK6google8protobuf7Message13SpaceUsedLongEv
Using a model trained with PaddleDetection's ssd_mobilenet_v1 and converted to ONNX with this tool, onnxruntime fails to detect the correct boxes; the confidence scores are also low, far from the real results.
Many unit tests seem to be broken for me on the develop branch.
In particular, the hard-coding of the output var type is problematic:
def append_input_output(self, block, op_proto, np_list, persistable_list, is_input):
...
def create_var(block, name, np_list, var_proto):
...
return block.create_var(
dtype='float32',
shape=shape,
persistable=persistable,
lod_level=lod_level,
name=name)
However, deriving the type from np_val fixes some tests but breaks many others.
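One way to remove the hard-coding, offered as a sketch (it assumes np_list maps variable names to numpy values, as the helper's signature suggests; the function name is mine):

```python
import numpy as np

def infer_var_dtype(np_list, name):
    # Derive the dtype string from the numpy value itself instead of
    # hardcoding 'float32'; fall back to float32 when no value is given.
    val = np_list.get(name)
    if val is None:
        return 'float32'
    return str(np.asarray(val).dtype)
```

The result (e.g. 'int64', 'float64') can then be passed as the dtype argument of block.create_var, which is where the tests that expect non-float32 outputs currently diverge.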
Python: 3.7
Conversion framework version: PaddlePaddle 1.5.1
PaddleDetection 0.1: model trained with darknet yolov3
paddle2onnx: 0.2
Training config and model: https://github.com/PaddlePaddle/PaddleDetection/blob/release/0.1/configs/yolov3_darknet_voc.yml
Conversion command used: paddle2onnx --fluid_model mj_yolov3_darknet/ --fluid_model_name mj_yolov3_darknet/model --fluid_params_name mj_yolov3_darknet/params --onnx_model mjy3
Resulting output and error message:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
----------- Configuration Arguments -----------
check_task: image_classification
debug: False
fluid_model: mj_yolov3_darknet/
fluid_model_name: mj_yolov3_darknet/model
fluid_params_name: mj_yolov3_darknet/params
image_path:
name_prefix:
onnx_model: mjy3
return_variable: False
to_print_model: False
load the model parameter done.
The operator sets to run test case.
{'concat', 'multiclass_nms', 'scale', 'conv2d', 'leaky_relu', 'batch_norm', 'yolo_box', 'nearest_interp', 'elementwise_add', 'transpose2'}
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/bin/paddle2onnx", line 10, in
sys.exit(main())
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/fluid_onnx/fluid_to_onnx.py", line 230, in main
convert(args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/fluid_onnx/fluid_to_onnx.py", line 194, in convert
checker.check_model(onnx_model)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/onnx/checker.py", line 91, in check_model
C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Node () has input size 0 not in range [min=1, max=1].
==> Context: Bad node spec: output: "nearest_interp_0.tmp_0@out_size_f" op_type: "Cast" attribute { name: "to" i: 1 type: INT }
@cjt222
----------- Configuration Arguments -----------
check_task: image_classification
debug: False
fluid_model: ../mask_detector/
fluid_model_name: __model__
fluid_params_name: __param__
image_path:
name_prefix:
onnx_model: ./model.onnx
return_variable: False
to_print_model: False
------------------------------------------------
load the model parameter done.
Traceback (most recent call last):
File "fluid_to_onnx.py", line 234, in <module>
main()
File "fluid_to_onnx.py", line 230, in main
convert(args)
File "fluid_to_onnx.py", line 141, in convert
block=block)
File "/home/tu/anaconda3/envs/occlusion_face_paddle/lib/python3.7/site-packages/fluid_onnx/ops.py", line 192, in conv2d_op
kernel_shape = block.vars[get_old_name(inputs['Filter'][0])].shape
KeyError: ''
I tried several environment combinations based on requirements.txt and still get errors. Taking the 1.0 release as an example, which versions should the environment below use?
protobuf==?
onnx==?
paddlepaddle==?
Using pip install paddlepaddle, I get paddlepaddle (0.11.0).
After that I tried python convert.py --modeldir inception_v1/ and got the error: import paddle.fluid as fluid ImportError: No module named fluid.
Then I changed it to import paddle.v2.fluid as fluid, and that works.
Rerunning the converter gives the following error:
File "/home/haifeng/Synopsys/paddle-onnx/convert.py", line 21, in <module>
import fluid_onnx.ops as ops
File "/home/haifeng/Synopsys/paddle-onnx/fluid_onnx/ops.py", line 17, in <module>
from paddle.fluid.executor import fetch_var
ImportError: No module named fluid.executor
It seems the converter expects a different PaddlePaddle version.
Could you look into it?
Thanks
Symptom: a basic multiplication, converted to an ONNX file, fails at serving time. Error message below.
Environment: Paddle 1.5, paddle2onnx 0.2
The Paddle-side code:
import os
import paddle.fluid as fluid
import paddle.fluid.layers as layers

dataX = fluid.layers.data(name="dataX", append_batch_size=False, shape=[2, 5], dtype="float32")
w = layers.create_parameter(shape=[5, 3], dtype='float32', is_bias=False)
output = fluid.layers.mul(dataX, w,
                          x_num_col_dims=1,
                          y_num_col_dims=1)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
infer_path = os.path.join('./', 'ckp-infer2')
if not os.path.isdir(infer_path):
    os.makedirs(infer_path)
fluid.io.save_inference_model(main_program=fluid.default_main_program(),
                              dirname=infer_path,
                              feeded_var_names=[dataX.name],
                              target_vars=[output],
                              executor=exe)
Then the command to convert to ONNX:
paddle2onnx --fluid_model ckp-infer2/ --onnx_model paddle2onnx_test
We are going to support the conversion of the following models at the first stage. They can all be found in our models bank; some may only need to be verified after #19 is merged, and some lack necessary operators.
Even when all the models above are supported, only a small subset of operators is used and verified. So for the remaining operators, we may need another task:
Hints:
Paddle 1.3.0 hides the fetch_var interface, so paddle-onnx cannot be used...
Two models included: recognize_digits_conv and recognize_digits_mlp
Instead of doing an RNN model initially, we instead focus on image recognition models with ResNet or VGG architectures
Hello, I ran into an issue converting the example fluid model to an ONNX model. Can anyone help?
Traceback (most recent call last):
File "fluid_to_onnx.py", line 143, in
convert(args)
File "fluid_to_onnx.py", line 67, in convert
var=var, scope=inference_scope)
File "/home/test/paddle-onnx/fluid_onnx/variables.py", line 39, in paddle_onnx_weight
data = _fetch_var(var.name, scope)
File "/home/test/.local/lib/python2.7/site-packages/paddle/fluid/executor.py", line 191, in _fetch_var
assert isinstance(name, str)
AssertionError
ONNX version: 1.2.2+ (pip install on centos 7.2)
Paddle: github Aug 28th
recognize_digits: Conv, Add, Relu, MaxPool, BatchNormalization, Reshape, MatMul, Softmax, Tanh
ResNet: Conv, Add, Relu, AveragePool, Reshape, MatMul, Softmax
VGG: Conv, Add, Relu, Dropout, Constant, Mul, MaxPool, Reshape, MatMul, BatchNormalization, Softmax
MobileNet: Conv, BatchNormalization, Relu, GlobalAveragePool, Reshape, MatMul, Add, Softmax
SE_ResNeXt: Conv, BatchNormalization, Relu, MaxPool, GlobalAveragePool, Reshape, MatMul, Add, Sigmoid, Mul, Dropout, Constant, Softmax
Inception V4: Conv, BatchNormalization, MaxPool, Concat, AveragePool, Dropout, Constant, Mul, Reshape, MatMul, Add, Softmax
ONNX == 1.0.1, Data layout == NCHW
New features or hot-fixes should be pushed to the develop branch first. The master branch should maintain the stable version. We can polish the conversion for the fit_a_line model, and once the model is well supported we can ship it to the master branch. Please vote for this proposal.
We need to figure out whether Paddle params will be read:
Additionally, we need to get on the same page with populating the graph with parameters
Paddle-ONNX generated models set dim_value=0 as a placeholder for parameters to be set by the user, such as batch size. There seems to be no facility in ONNX to store an undefined shape value such as 0.
The undefined value breaks the ngraph-onnx importer, and this thread opens a discussion to seek a solution.
There is a patch developed by @arogowie-intel which can be submitted for review. That is one option.
See more info here: #52
Do you plan to support the conversion for DeepSpeech?
Traceback (most recent call last):
File "/ssd1/share/python36/bin/paddle2onnx", line 11, in
load_entry_point('paddle2onnx==0.1', 'console_scripts', 'paddle2onnx')()
File "/ssd1/share/python36/lib/python3.6/site-packages/paddle2onnx-0.1-py3.6.egg/fluid_onnx/fluid_to_onnx.py", line 230, in main
File "/ssd1/share/python36/lib/python3.6/site-packages/paddle2onnx-0.1-py3.6.egg/fluid_onnx/fluid_to_onnx.py", line 99, in convert
File "/ssd1/share/python36/lib/python3.6/site-packages/paddle/fluid/io.py", line 1199, in load_inference_model
with open(model_filename, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/work/******/params/model'
Running the code errors out. What is the __model__ file? The ERNIE model I downloaded from Paddle doesn't contain this file either.
Hello, does Paddle2onnx support the DeepASR model? Or is there any way to perform such a conversion?
Pull request at #105