paddlepaddle / x2paddle

Deep learning model converter for PaddlePaddle (飞桨).

Home Page: http://www.paddlepaddle.org/

License: Apache License 2.0

Languages: Python 99.99%, Shell 0.01%
Topics: paddlepaddle, tensorflow, caffe, model-converter, onnx, pytorch, x2paddle-model-zoo

x2paddle's Introduction

X2Paddle


Introduction

X2Paddle is the model conversion tool of the PaddlePaddle (飞桨) ecosystem. It helps users of other deep learning frameworks migrate to PaddlePaddle quickly. It currently supports converting inference models from other frameworks and migrating PyTorch training code, and it also provides detailed cross-framework API comparison documents, reducing the time developers spend porting models to PaddlePaddle.

Features

  • Mainstream deep learning frameworks supported

    • Conversion of inference models from the four major frameworks Caffe/TensorFlow/ONNX/PyTorch, plus conversion of PyTorch training projects, covering today's mainstream deep learning frameworks
  • Broad model coverage

    • Most mainstream CV and NLP models can be converted; X2Paddle currently supports 130+ PyTorch ops, 90+ ONNX ops, 90+ TensorFlow ops and 30+ Caffe ops. See the support list for details
  • Simple and easy to use

    • A single command line or a single API call completes a model conversion

Capabilities

  • Inference model conversion

    • Converts Caffe/TensorFlow/ONNX/PyTorch models into PaddlePaddle inference models in one step, ready for deployment on CPU/GPU/Arm and other devices with PaddleInference/PaddleLite
  • PyTorch training project conversion

    • Converts the Python code of a PyTorch project (training and inference included) into an equivalent project based on the PaddlePaddle framework in one step, helping developers migrate quickly; migrated projects can use the large amount of free compute that the AIStudio platform provides for PaddlePaddle [new feature, give it a try!]
  • API mapping documentation

    • Detailed API comparison documents help developers move from the PyTorch API to the PaddlePaddle API, greatly reducing the learning cost [new content, check it out!]

Installation

Requirements

  • python >= 3.5
  • paddlepaddle >= 2.2.2
  • tensorflow == 1.14 (required only for converting TensorFlow models)
  • onnx >= 1.6.0 (required only for converting ONNX models)
  • torch >= 1.5.0 (required only for converting PyTorch models)
  • paddlelite >= 2.9.0 (required only for one-step export to the Paddle-Lite format; the latest version is recommended)

pip installation (recommended)

To use the stable release, install X2Paddle via pip:

pip install x2paddle

Install from source

To try the latest features, install from source:

git clone https://github.com/PaddlePaddle/X2Paddle.git
cd X2Paddle
git checkout develop
python setup.py install

Quick start

Feature 1: inference model conversion

PyTorch model conversion

from x2paddle.convert import pytorch2paddle
pytorch2paddle(module=torch_module,
               save_dir="./pd_model",
               jit_type="trace",
               input_examples=[torch_input])
# module (torch.nn.Module): the PyTorch module to convert.
# save_dir (str): directory in which the converted model is saved.
# jit_type (str): conversion mode, "trace" or "script". Defaults to "trace".
# input_examples (list[torch.Tensor]): example inputs for the module; the list length must match the number of model inputs. Defaults to None.

For script mode and more details, see the PyTorch model conversion documentation.
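For reference, below is a minimal end-to-end sketch. It is an illustration only: torchvision and resnet18 are stand-ins for whatever torch.nn.Module you actually want to convert, and the input shape is assumed.

import torch
import torchvision
from x2paddle.convert import pytorch2paddle

# Hypothetical stand-in module; any torch.nn.Module can be used the same way.
torch_module = torchvision.models.resnet18(pretrained=True)
torch_module.eval()  # convert in inference mode

# One example tensor per model input, matching the shape the model expects.
torch_input = torch.randn(1, 3, 224, 224)

pytorch2paddle(module=torch_module,
               save_dir="./pd_model",
               jit_type="trace",
               input_examples=[torch_input])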

TensorFlow model conversion

x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model

ONNX model conversion

x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model

Caffe model conversion

x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel --save_dir=pd_model
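If the Caffe model contains custom layers, the caffe_pb2.py compiled from the corresponding caffe.proto can be passed in as well (a sketch; the path is a placeholder, see --caffe_proto in the parameter table below):

x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel --save_dir=pd_model --caffe_proto=./caffe_pb2.py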

Conversion parameters

Parameter              Description
--framework            Source framework (tensorflow, caffe, onnx)
--prototxt             Path to the Caffe proto file, used when framework is caffe
--weight               Path to the Caffe weights file, used when framework is caffe
--save_dir             Directory in which the converted model is saved
--model                Path to the TensorFlow pb model or the ONNX model, used when framework is tensorflow/onnx
--input_shape_dict     [optional] For ONNX, defines the input shapes of the ONNX model
--caffe_proto          [optional] Path to the caffe_pb2.py file compiled from caffe.proto, needed when the model contains custom layers; defaults to None
--define_input_shape   [optional] For TensorFlow, when set the user is prompted for the shape of every Placeholder (see Q2 of the documentation)
--enable_code_optim    [optional] For PyTorch, whether to optimize the generated code; defaults to False
--to_lite              [optional] Whether to run the opt tool and export a model in the Paddle-Lite format; defaults to False
--lite_valid_places    [optional] Target backends for the Lite conversion; several backends can be given, separated by commas, and opt picks the best one; defaults to arm
--lite_model_type      [optional] Lite model format; protobuf and naive_buffer are currently supported; defaults to naive_buffer
--disable_feedback     [optional] Whether to disable usage feedback; by default X2Paddle records the conversion success rate and the source framework to guide development, and never uploads model files; set this parameter to opt out
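For example (a sketch mirroring the commands above, with a placeholder model path), forcing explicit Placeholder shapes for a TensorFlow model looks like this; x2paddle will then prompt for the shape of each input tensor:

x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model --define_input_shape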

X2Paddle API

X2Paddle also exposes a Python API for model conversion; see X2PaddleAPI.
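As a rough sketch (the authoritative signatures are in the X2PaddleAPI document; the call below is an assumption based on the x2paddle.convert helpers that appear in the tracebacks quoted later on this page), an ONNX model can be converted from Python like this:

# Assumed helper and signature (model path, output directory); check X2PaddleAPI for the full argument list.
from x2paddle.convert import onnx2paddle

onnx2paddle("onnx_model.onnx", "pd_model")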

One-step export to the Paddle-Lite format

See Exporting Paddle-Lite models with X2Paddle.
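As a sketch combining the flags described in the parameter table above (paths, backend and the exact flag syntax should be checked against x2paddle --help), converting an ONNX model and exporting it directly to a Paddle-Lite naive_buffer model could look like:

x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model --to_lite=True --lite_valid_places=arm --lite_model_type=naive_buffer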

Feature 2: PyTorch training code migration

Project conversion takes three steps:

  1. Pre-process the project code
  2. Convert the code and the pre-trained model in one step
  3. Post-process the converted code

See the PyTorch training project conversion documentation for details.

Model conversion with VisualDL

The PaddlePaddle visualization tool VisualDL hosts this conversion tool as a service on its official website; click the service link below for online ONNX-to-Paddle conversion.

ONNX2Paddle

Tutorials

  1. TensorFlow inference model conversion tutorial
  2. MMDetection model conversion guide
  3. PyTorch inference model conversion tutorial
  4. PyTorch training project conversion tutorial

🤗 Contributing 🤗

We warmly welcome code contributions and usage suggestions for X2Paddle. If you can fix an issue or add a new feature, feel free to open a Pull Request; if you need a particular PyTorch training project converted, feel free to open an issue.

x2paddle's People

Contributors

andpuqing, bbuf, cdyangzhenyu, channingss, dawn1206, driftcloudy, firedent, geoyee, jiangjiajun, kasyoukin, littletomatodonkey, lutaochu, macrobull, mamingjie-china, qili93, qqj1130247885, rainyfly, renwb-lab, rollroll90, sonixixi, sunahong1993, tingggggg, wjj19950828, wyxogo, yeliang2258, yma-admin, zeyuchen, zhoucz97, zhoukunsheng, zoruasama


x2paddle's Issues

What is a good way to convert a model's bidirectional GRU?

PyTorch code to reproduce:
import torch
import torch.nn as nn
rnn = nn.GRU(10,20, bidirectional=True)
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
output, hn = rnn(input,h0)
torch.onnx.export(rnn, input, 'test.onnx', verbose=True)

The graph produced when exporting to ONNX:
graph(%input : Float(5, 3, 10),
%weight_ih_l0 : Float(60, 10),
%weight_hh_l0 : Float(60, 20),
%bias_ih_l0 : Float(60),
%bias_hh_l0 : Float(60),
%weight_ih_l0_reverse : Float(60, 10),
%weight_hh_l0_reverse : Float(60, 20),
%bias_ih_l0_reverse : Float(60),
%bias_hh_l0_reverse : Float(60)):
%9 : Long() = onnx::Constantvalue={1}, scope: GRU
%10 : Tensor = onnx::Shape(%input), scope: GRU
%11 : Long() = onnx::Gather[axis=0](%10, %9), scope: GRU
%12 : Long() = onnx::Constantvalue={2}, scope: GRU
%13 : Long() = onnx::Constantvalue={20}, scope: GRU
%14 : Tensor = onnx::Unsqueezeaxes=[0]
%15 : Tensor = onnx::Unsqueezeaxes=[0]
%16 : Tensor = onnx::Unsqueezeaxes=[0]
%17 : Tensor = onnx::Concat[axis=0](%14, %15, %16)
%18 : Float(2, 3, 20) = onnx::ConstantOfShapevalue={0}, scope: GRU
%19 : Tensor? = prim::Constant(), scope: GRU
%20 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%21 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%22 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%23 : Tensor = onnx::Concat[axis=0](%20, %21, %22), scope: GRU
%24 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%25 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%26 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%27 : Tensor = onnx::Concat[axis=0](%24, %25, %26), scope: GRU
%28 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%29 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%30 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%31 : Tensor = onnx::Concat[axis=0](%28, %29, %30), scope: GRU
%32 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%33 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%34 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%35 : Tensor = onnx::Concat[axis=0](%32, %33, %34), scope: GRU
%36 : Tensor = onnx::Concat[axis=0](%31, %35), scope: GRU
%37 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%38 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%39 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%40 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%41 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%42 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%43 : Tensor = onnx::Concat[axis=0](%40, %41, %42), scope: GRU
%44 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%45 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%46 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%47 : Tensor = onnx::Concat[axis=0](%44, %45, %46), scope: GRU
%48 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%49 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%50 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%51 : Tensor = onnx::Concat[axis=0](%48, %49, %50), scope: GRU
%52 : Tensor = onnx::Sliceaxes=[0], ends=[40], starts=[20], scope: GRU
%53 : Tensor = onnx::Sliceaxes=[0], ends=[20], starts=[0], scope: GRU
%54 : Tensor = onnx::Sliceaxes=[0], ends=[60], starts=[40], scope: GRU
%55 : Tensor = onnx::Concat[axis=0](%52, %53, %54), scope: GRU
%56 : Tensor = onnx::Concat[axis=0](%51, %55), scope: GRU
%57 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%58 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%59 : Tensor = onnx::Unsqueezeaxes=[0], scope: GRU
%60 : Tensor = onnx::Concat[axis=0](%37, %57), scope: GRU
%61 : Tensor = onnx::Concat[axis=0](%38, %58), scope: GRU
%62 : Tensor = onnx::Concat[axis=0](%39, %59), scope: GRU
%63 : Tensor, %64 : Float(2, 3, 20) = onnx::GRU[direction="bidirectional", hidden_size=20, linear_before_reset=1](%input, %60, %61, %62, %19, %18), scope: GRU
%65 : Tensor = onnx::Transposeperm=[0, 2, 1, 3], scope: GRU
%66 : Tensor = onnx::Constantvalue= 0 0 -1 [ Variable[CPUType]{3} ], scope: GRU
%67 : Float(5, 3, 40) = onnx::Reshape(%65, %66), scope: GRU
return (%67, %64)

Running x2paddle --framework=onnx --model=test.onnx --save_dir=./ to convert to a Paddle model cannot find node %19 and fails as follows:
Traceback (most recent call last):
File "/home/zhangzexin/anaconda3/envs/paddle/bin/x2paddle", line 10, in
sys.exit(main())
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle/convert.py", line 211, in main
onnx2paddle(args.model, args.save_dir)
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle/convert.py", line 154, in onnx2paddle
model = ONNXDecoder(model_path)
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle/decoder/onnx_decoder.py", line 334, in init
self.standardize_variable_name(model.graph)
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle/decoder/onnx_decoder.py", line 501, in standardize_variable_name
node.input[i] = self.make_variable_name(node.input[i])
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle/decoder/onnx_decoder.py", line 478, in make_variable_name
raise ValueError('name should not be empty')
ValueError: name should not be empty

I have read the existing GRU issue on GitHub, which recommends using nn.GRUCell, but my model relies on the bidirectional and other attributes that nn.GRU wraps. I would like to know (1) whether nn.GRU currently cannot be converted at all, and (2) whether there is a way to convert the model to Paddle without retraining it. Looking forward to a reply, thanks!

Conversion fails when loading a TF meta file whose outputs are a list of tensors

python tf2fluid/convert.py --meta_file=${meta_file} \
    --ckpt_dir=./textnet-part-vgg-synth-8gpu-tf-base/ \
    --in_nodes=input_images,input_proposals,input_decoders,input_target_weights \
    --input_shape=None,512,512,3 \
    --input_format=NHWC \
    --output_nodes=model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_1/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_2/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_3/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_4/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_5/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_6/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_7/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_8/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_9/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_10/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_11/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_12/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_13/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_14/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_15/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_16/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_17/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_18/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_19/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_20/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_21/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_22/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_23/AttnOutputProjection/BiasAdd,model_wo_buckets/embedding_attention_decoder/attention_decoder/AttnOutputProjection_24/AttnOutputProjection/BiasAdd \
    --use_cuda=True \
    --save_dir=./save_fluid_model/

TypeError: OpDesc() missing 1 required positional argument: 'attrs'

[ INFO]onnx2fluid::convert:0071: loading model: baidu_903_ep39.onnx ...
[ INFO]onnx2fluid::convert:0075: checking model ...
[ INFO]onnx2fluid::convert:0082: using opset version: 9
[ INFO]onnx2fluid::convert:0093: model has 588 ops
[ INFO]onnx2fluid::convert:0098: optimizing model ...
[ DEBUG]onnx2fluid.onnx_utils::polish_model:0342: builtin optimizations to perform in ONNX:
['eliminate_deadend', 'eliminate_identity', 'eliminate_nop_dropout', 'eliminate_nop_monotone_argmax', 'eliminate_nop_pad', 'eliminate_nop_transpose', 'eliminate_unused_initializer', 'extract_constant_to_initializer', 'fuse_add_bias_into_conv', 'fuse_bn_into_conv', 'fuse_consecutive_concats', 'fuse_consecutive_log_softmax', 'fuse_consecutive_reduce_unsqueeze', 'fuse_consecutive_squeezes', 'fuse_consecutive_transposes', 'fuse_matmul_add_bias_into_gemm', 'fuse_pad_into_conv', 'fuse_transpose_into_gemm', 'lift_lexical_references', 'nop']
[ INFO]onnx2fluid::convert:0104: folder baidu_903_ep39/ cleared
[ INFO]onnx2fluid::convert:0136: conversion started
[ DEBUG]onnx2fluid::convert:0146: translating op op_0(op_0) ai.onnx::Shape ...
[CRITICAL]onnx2fluid::convert:0163: conversion failed for:
['var_0'] -> ::Shape -> ['var_670']
Traceback (most recent call last):
File "/home/zhangzexin/anaconda3/envs/paddle/bin/onnx2fluid", line 11, in
load_entry_point('onnx2fluid==0.1.1', 'console_scripts', 'onnx2fluid')()
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/pkg_resources/init.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/pkg_resources/init.py", line 2843, in load_entry_point
return ep.load()
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/pkg_resources/init.py", line 2434, in load
return self.resolve()
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/pkg_resources/init.py", line 2440, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/main.py", line 121, in
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/cmdline.py", line 62, in main
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/conversion.py", line 164, in convert
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/conversion.py", line 159, in convert
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 328, in emit_op
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 232, in Op
File "/home/zhangzexin/anaconda3/envs/paddle/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/symbolic.py", line 2037, in Shape
TypeError: OpDesc() missing 1 required positional argument: 'attrs'

Could you tell me what might be causing this?

minor problems

ONNX to Paddle conversion fails

Now translating model from onnx to paddle.
model ir_version: 4, op version: 9
W1127 21:32:25.501857 54097 init.cc:125] Compiled with WITH_GPU, but no GPU found in runtime.
Traceback (most recent call last):
File "/home/yulu/anaconda3/envs/paddle/bin/onnx_infer", line 11, in
load_entry_point('x2paddle==0.6.0', 'console_scripts', 'onnx_infer')()
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/onnx_infer.py", line 39, in main
sess = rt.InferenceSession(model_dir)
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 29, in init
self._sess.load_model(path_or_bytes)
RuntimeError: [ONNXRuntimeError] : 1 : GENERAL ERROR : Load model from pd_model/tmp_data/onnx_model_infer.onnx failed:Type Error: Type parameter (T) bound to different types (tensor(float) and tensor(int64) in node (_1211).
Total nodes: 545
in (Constant -> _1146): attribute "shape" of _1146 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1161): attribute "shape" of _1161 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1162): attribute "shape" of _1162 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1171): attribute "shape" of _1171 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1174): attribute "shape" of _1174 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1177): attribute "shape" of _1177 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1181): attribute "shape" of _1181 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1184): attribute "shape" of _1184 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1187): attribute "shape" of _1187 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1190): attribute "shape" of _1190 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1193): attribute "shape" of _1193 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1201): attribute "shape" of _1201 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1224): attribute "shape" of _1224 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1227): attribute "shape" of _1227 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1230): attribute "shape" of _1230 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1234): attribute "shape" of _1234 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1242): attribute "shape" of _1242 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1255): attribute "shape" of _1255 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1262): attribute "shape" of _1262 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1275): attribute "shape" of _1275 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1285): attribute "shape" of _1285 not inferred, using value as 1-D tensor may lead to fails
in (Constant -> _1286): attribute "shape" of _1286 not inferred, using value as 1-D tensor may lead to fails
Traceback (most recent call last):
File "/home/yulu/anaconda3/envs/paddle/bin/x2paddle", line 11, in
load_entry_point('x2paddle==0.6.0', 'console_scripts', 'x2paddle')()
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/convert.py", line 233, in main
onnx2paddle(args.model, args.save_dir, params_merge)
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/convert.py", line 170, in onnx2paddle
mapper = ONNXOpMapper(model, save_dir)
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 90, in init
func(node)
File "/home/yulu/anaconda3/envs/paddle/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 513, in Unsqueeze
if len(val_x.out_shapes[0]) == 0:
TypeError: object of type 'NoneType' has no len()

Is caffe slice layer supported?

When trying to convert Caffe to Paddle, I got the following error.

squeeze idx:1, with kind:Convolution,name:conv1
Traceback (most recent call last):
  File "/home/turing/anaconda3/bin/x2paddle", line 10, in <module>
    sys.exit(main())
  File "/home/turing/anaconda3/lib/python3.6/site-packages/x2paddle/convert.py", line 163, in main
    args.caffe_proto)
  File "/home/turing/anaconda3/lib/python3.6/site-packages/x2paddle/convert.py", line 101, in caffe2paddle
    mapper = CaffeOpMapper(model)
  File "/home/turing/anaconda3/lib/python3.6/site-packages/x2paddle/op_mapper/caffe_op_mapper.py", line 44, in __init__
    self.set_node_shape(node)
  File "/home/turing/anaconda3/lib/python3.6/site-packages/x2paddle/op_mapper/caffe_op_mapper.py", line 80, in set_node_shape
    input_shape.append(last_node.output_shape[idx])
IndexError: list index out of range

Hoping for any suggestions.
My prototxt follows.

name: "model"
layer {
name: "input"
type: "Input"
top: "data"
input_param {
shape {
dim: 1
dim: 1
dim: 128
dim: 128
}
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "slice1"
type: "Slice"
bottom: "conv1"
top: "slice1_1"
top: "slice1_2"
slice_param {
slice_dim: 1
}
}
...

IndexError: list index out of range?

Hi,

x2paddle conversion gives this error. I used Ubuntu 18.04 in VirtualBox.

Model: ssd_mobilenet_custom.zip

x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=coco/ssd_mobilenet_custom.caffemodel --save_dir=pd_model
Now translating model from caffe to paddle.
Total nodes: 268
Traceback (most recent call last):
  File "/home/ghimire/.local/bin/x2paddle", line 11, in <module>
    sys.exit(main())
  File "/home/ghimire/.local/lib/python2.7/site-packages/x2paddle/convert.py", line 227, in main
    args.caffe_proto, params_merge)
  File "/home/ghimire/.local/lib/python2.7/site-packages/x2paddle/convert.py", line 146, in caffe2paddle
    mapper = CaffeOpMapper(model)
  File "/home/ghimire/.local/lib/python2.7/site-packages/x2paddle/op_mapper/caffe_op_mapper.py", line 51, in __init__
    self.deal_custom_layer(node)
  File "/home/ghimire/.local/lib/python2.7/site-packages/x2paddle/op_mapper/caffe_op_mapper.py", line 944, in deal_custom_layer
    input = self.graph.get_bottom_node(input, idx=0, copy=True)
  File "/home/ghimire/.local/lib/python2.7/site-packages/x2paddle/decoder/caffe_decoder.py", line 204, in get_bottom_node
    input_node_name = node.inputs[idx]
IndexError: list index out of range

Please help.

Best,
Deepak

error

(pytorch1_1_0) boyun@boyun:~/software/ubuntu-wine/BaiduYunDownload/X2Paddle-master/onnx2fluid$ onnx2fluid resnet18.onnx -t sample_1.npz
[ INFO]convert::convert:0051: loading model: resnet18.onnx ...
[ INFO]convert::convert:0054: checking model ...
[ INFO]convert::convert:0072: model has 296 ops
[ INFO]convert::convert:0073: optimizing model ...
[ INFO]convert::convert:0082: folder resnet18/ cleared
[ INFO]convert::convert:0113: conversion started
[CRITICAL]convert::convert:0136: conversion failed for:
['646', '653'] -> ::Reshape -> ['654']
Traceback (most recent call last):
File "/home/boyun/anaconda3/envs/pytorch1_1_0/bin/onnx2fluid", line 11, in
load_entry_point('onnx2fluid==0.1.1', 'console_scripts', 'onnx2fluid')()
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/pkg_resources/init.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/pkg_resources/init.py", line 2843, in load_entry_point
return ep.load()
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/pkg_resources/init.py", line 2434, in load
return self.resolve()
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/pkg_resources/init.py", line 2440, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/main.py", line 106, in
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/cmdline.py", line 60, in main
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/conversion.py", line 137, in convert
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/conversion.py", line 132, in convert
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 312, in emit_op
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 244, in Op
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/symbolic.py", line 1559, in Reshape
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 200, in OpDesc
File "/home/boyun/anaconda3/envs/pytorch1_1_0/lib/python3.6/site-packages/onnx2fluid-0.1.1-py3.6.egg/onnx2fluid/writer.py", line 152, in OpDescAttrs
ValueError: unsupported attribute shape = []

The op DepthwiseConvolution in model is not supported yet

register layer[ROIPooling]
register layer[PriorBox]
register layer[Permute]
register layer[DetectionOutput]
register layer[Normalize]
register layer[Select]
register layer[ShuffleChannel]
register layer[ConvolutionDepthwise]
register layer[Axpy]
Now translating model from caffe to paddle.
...
Traceback (most recent call last):
  File "/Users//anaconda3/bin/x2paddle", line 10, in <module>
    sys.exit(main())
  File "/Users//anaconda3/lib/python3.5/site-packages/x2paddle/convert.py", line 201, in main
    args.caffe_proto)
  File "/Users//anaconda3/lib/python3.5/site-packages/x2paddle/convert.py", line 127, in caffe2paddle
    mapper = CaffeOpMapper(model)
  File "/Users//anaconda3/lib/python3.5/site-packages/x2paddle/op_mapper/caffe_op_mapper.py", line 55, in __init__
    "The op {} in model is not supported yet.".format(op))
Exception: The op DepthwiseConvolution in model is not supported yet.

I see ConvolutionDepthwise among the registered layers; can DepthwiseConvolution be mapped to it directly?

Converting MobileNet (TensorFlow) with x2paddle fails

Conversion command: x2paddle -f tensorflow -m frozen_inference_graph.pb -s pd_model/ --without_data_format_optimization --define_input_shape
Error output:
Now translating model from tensorflow to paddle.
Define shape[now is [-1L, -1L, -1L, 3L]] for input tensor[tensor name: "image_tensor']
Use your keyboard type the shape of input tensor below :)
Shape of Input(e.g. None,224,224,3): None,224,224,3
Traceback (most recent call last):
File "/usr/local/bin/x2paddle", line 11, in
load_entry_point('x2paddle==0.6.0', 'console_scripts', 'x2paddle')()
File "/usr/local/lib/python2.7/dist-packages/x2paddle-0.6.0-py2.7.egg/x2paddle/convert.py", line 219, in main
define_input_shape, params_merge)
File "/usr/local/lib/python2.7/dist-packages/x2paddle-0.6.0-py2.7.egg/x2paddle/convert.py", line 106, in tf2paddle
model = TFDecoder(model_path, define_input_shape=define_input_shape)
File "/usr/local/lib/python2.7/dist-packages/x2paddle-0.6.0-py2.7.egg/x2paddle/decoder/tf_decoder.py", line 263, in init
self.tf_graph.build()
File "/usr/local/lib/python2.7/dist-packages/x2paddle-0.6.0-py2.7.egg/x2paddle/decoder/tf_decoder.py", line 123, in build
format(in_node, layer_name))
Exception: input[^Preprocessor_map_while_Identity] of node[Preprocessor_map_while_add_y] does not exist in node_map

The test model and environment details are on Baidu Netdisk:
https://pan.baidu.com/s/1LTRQdLbGk35KZGL7Mloy_g (extraction code: jm0u)

net_template.py in master caffe2fluid has a bug

It should be changed as follows:
diff --git a/caffe2fluid/kaffe/net_template.py b/caffe2fluid/kaffe/net_template.py
index f9387c9..c6810c6 100644
--- a/caffe2fluid/kaffe/net_template.py
+++ b/caffe2fluid/kaffe/net_template.py
@@ -100,7 +100,7 @@ def main():
npy_weight = args.npy_path
fluid_model = args.model_param_path
outputs = None

-    if len(sys.argv) >= 6:
+    if args.need_layers_name:
         outputs = args.need_layers_name.split(',')

     ret = MyNet.convert(npy_weight, fluid_model, outputs)

can not support get_dynamic_shape

I use dynamic shapes in my ONNX model and have successfully exported to ONNX. When converting to Paddle, the error is:

 File "/root/miniconda3/envs/torch120/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.5.0', 'console_scripts', 'x2paddle')()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 211, in main
    onnx2paddle(args.model, args.save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 157, in onnx2paddle
    mapper = ONNXOpMapper(model, save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 65, in __init__
    self.get_output_shapes()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 157, in get_output_shapes
    _, dtype, shape = self.get_dynamic_shape(opt)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 134, in get_dynamic_shape
    output = np.load(os.path.join(self.tmp_data_dir, layer + '.npy'))

The model I used is here: https://pan.baidu.com/s/1WrTbTyVmBBJrT6ZVffx_3g (extraction code: y9hb)

Ops are not supported yet: Sqrt, MirrorPad, Square

(paddle) bash-3.2$ x2paddle --framework=tensorflow --model=/Users/dev01/下载目录/测试model.pb/Angel_a025_c256_s512_a0.25_b2_sn16-32000_frozen.pb --save_dir=/Users/dev01/下载目录/paddle
Now translating model from tensorflow to paddle.
Total nodes: 468
Converting node 468 ... ==========3 Ops are not supported yet======
========== Sqrt ==========
========== MirrorPad ==========
========== Square ==========

don't support torch.tensor.expand?

When I use PyTorch's torch.Tensor.expand, I get this error:

Traceback (most recent call last):
  File "/root/miniconda3/envs/torch120/bin/onnx_infer", line 11, in <module>
    load_entry_point('x2paddle==0.5.0', 'console_scripts', 'onnx_infer')()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/onnx_infer.py", line 48, in main
    res = sess.run(None, input_feed=inputs_dict)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 72, in run
    return self._sess.run(output_names, input_feed, run_options)
RuntimeError: Method run failed due to: [ONNXRuntimeError] : 1 : GENERAL ERROR : /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:341 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. -1 by 264
Stacktrace:

There are 1 ops not supported yet, list as below
Expand
Traceback (most recent call last):
  File "/root/miniconda3/envs/torch120/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.5.0', 'console_scripts', 'x2paddle')()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 211, in main
    onnx2paddle(args.model, args.save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 157, in onnx2paddle
    mapper = ONNXOpMapper(model, save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 68, in __init__
    raise Exception("Model are not supported yet.")
Exception: Model are not supported yet.

It seems Expand is not supported?

Loading the converted model fails with: Enforce failed. Expected version == 0U, but received version:5 != 0U:0.

The full error is as follows:
Traceback (most recent call last):
File "E:/CNNNET/paddle/tensorflow2fluid/tf2fluid/predict.py", line 6, in
model = ml.ModelLoader("translated_paddle_model", use_cuda=True)
File "E:\CNNNET\paddle\tensorflow2fluid\tf2fluid\model_loader.py", line 42, in init
fluid.io.load_vars(self.exe, model_dir, vars=var_list)
File "D:\Anaconda3\lib\site-packages\paddle\fluid\io.py", line 610, in load_vars
executor.run(load_prog)
File "D:\Anaconda3\lib\site-packages\paddle\fluid\executor.py", line 565, in run
use_program_cache=use_program_cache)
File "D:\Anaconda3\lib\site-packages\paddle\fluid\executor.py", line 642, in _run
exe.run(program.desc, scope, 0, True, True, fetch_var_name)
paddle.fluid.core.EnforceNotMet: Invoke operator load error.
Python Callstacks:
File "D:\Anaconda3\lib\site-packages\paddle\fluid\framework.py", line 1654, in append_op
attrs=kwargs.get("attrs", None))
File "D:\Anaconda3\lib\site-packages\paddle\fluid\io.py", line 596, in load_vars
attrs={'file_path': os.path.join(dirname, new_var.name)})
File "E:\CNNNET\paddle\tensorflow2fluid\tf2fluid\model_loader.py", line 42, in init
fluid.io.load_vars(self.exe, model_dir, vars=var_list)
File "E:/CNNNET/paddle/tensorflow2fluid/tf2fluid/predict.py", line 6, in
model = ml.ModelLoader("translated_paddle_model", use_cuda=True)
C++ Callstacks:
Enforce failed. Expected version == 0U, but received version:5 != 0U:0.
Only version 0 is supported at [D:/1.4.1/paddle/paddle/fluid/framework/.tensor_util.cu:453]
PaddlePaddle Call Stacks:
Windows not support stack backtrace yet.
W0508 10:14:41.673883 22764 device_context.cc:261] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 9.2, Runtime API Version: 8.0
W0508 10:14:41.679867 22764 device_context.cc:269] device: 0, cuDNN Version: 7.0.

AttributeError: module 'model' has no attribute 'x2paddle_net'

lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/core/op_mapper.py", line 122, in save_inference_model
    inputs, outputs = model.x2paddle_net()
AttributeError: module 'model' has no attribute 'x2paddle_net'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.6.0', 'console_scripts', 'x2paddle')()
  File "/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/convert.py", line 233, in main
    onnx2paddle(args.model, args.save_dir, params_merge)
  File "/lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/convert.py", line 176, in onnx2paddle
    mapper.save_inference_model(save_dir, params_merge)
  File "lib/python3.6/site-packages/x2paddle-0.6.0-py3.6.egg/x2paddle/core/op_mapper.py", line 158, in save_inference_model
    .format(py_code_dir))
Exception: Paddle code was saved in ./model.py, but seems there's wrong exist, please check model.py manually.

ValueError: Object arrays cannot be loaded when allow_pickle=False

I am doing a caffe2paddle conversion.
The convert.py step succeeded, but exporting the npy weights and the generated Python network to a model that Paddle can load fails.
The problem is in
data_dict = np.load(data_path).item()
which is in the generated net.py, so I cannot say exactly which line; the error essentially says that np.load was called with allow_pickle=False, presumably the default. After changing it to
data_dict = np.load(data_path, allow_pickle=True).item()
the model exports successfully. The export produces only two files, one named model and one named params.

don't support onnx::gather?

The ONNX model uses onnx::Gather; the relevant part is:

%position_enc.weight : Float(264, 1024)):
  %2 : Float(1, 30, 1024) = onnx::Gather(%position_enc.weight, %x)

but it reports an error:

Traceback (most recent call last):
  File "/root/miniconda3/envs/torch120/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.5.0', 'console_scripts', 'x2paddle')()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 211, in main
    onnx2paddle(args.model, args.save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 157, in onnx2paddle
    mapper = ONNXOpMapper(model, save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 81, in __init__
    func(node)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 500, in Gather
    indices_shape) <= 1, "Gather op don't support dim of indice >1 "
AssertionError: Gather op don't support dim of indice >1

The PyTorch code is:

class test_model(nn.Module):
    def __init__(self, src_n_position=264, d_word_vec=1024):
        super().__init__()
        self.position_enc = nn.Embedding(src_n_position, d_word_vec)
        
    def forward(self, x):
        x = self.position_enc(x)
        
        return x
    
net2 = test_model()
x = torch.randint(30, (1, 30))
torch.onnx.export(net2, x, './Fonnx/tranformertwo.onnx', verbose=True, input_names=['x'])

PyTorch GRU to Paddle conversion error

I wrote a GRU in PyTorch, exported it to ONNX, and converting it with X2Paddle/onnx2fluid fails at onnx2fluid/writer.py, line 247, in Op with ValueError: conversion for ::ConstantFill not supported.

x2paddle reports an error when converting a TensorFlow model

(venv-python36) PS D:\tensorflow-learning\venv-python36\Scripts> x2paddle -m D:\work\AI\model\mobilenet_v1_1.0_224_frozen\mobilenet_v1_1.0_224_frozen.pb -s D:\work\AI\PaddlePaddleLite -f tensorflow -f tensorflow

paddlepaddle not installed, use "pip install paddlepaddle"

1.0.0<=tensorflow<2.0.0 is required, and v1.14.0 is recommended

But paddlepaddle is installed, and "pip list" shows: "paddlepaddle 1.6.1".

'TENSOR_TYPE_TO_NP_TYPE'

x2paddle/op_mapper/onnx_op_mapper.py, line 511
NameError: name 'TENSOR_TYPE_TO_NP_TYPE' is not defined

The input tensor name of a pb model cannot be read

The trained pb model runs fine with sess.run in TensorFlow, where the input in the feed_dict is
self.graph.get_tensor_by_name('prefix/input_1:0')
Setting in_nodes to "prefix/input_1:0" in convert.py produces the error shown in the attached screenshot.
Changing in_nodes to 'input_1' gives the same error.

ONNX to Paddle: dynamic shape for Upsample

The conversion path is PyTorch -> ONNX -> Paddle.

  1. The relevant part of the PyTorch code:
def forward(self, x):
    n, c, h, w = x.shape
    up = nn.functional.interpolate(rb2, size=((h+1)//2, (w+1)//2), mode='bilinear', align_corners=False)
    return up
  2. When exporting to ONNX, the corresponding Upsample layer is:
    %827 : Float(1, 576, 4, 35) = onnx::Upsample[mode="linear"](%808, %826), where %826 computes the dynamic shape.

  3. Converting with the develop branch of X2Paddle, model = ONNXDecoder(model_path) runs model = onnx.shape_inference.infer_shapes(model); ONNX apparently cannot infer dynamic shapes, and the four dims printed for Upsample are all empty.

  4. Because of step 3, the Upsample is converted with out_shape=[0, 0]; the corresponding layer in the generated model.py is _827 = fluid.layers.resize_bilinear(_808, scale=None, out_shape=[0, 0], name='_827'), where I manually changed the mode="linear" from step 2 to the "bilinear" that Paddle supports.

The Paddle documentation says that resize_bilinear "resizes the input by bilinear interpolation according to the given out_shape; the output shape is determined by actual_shape, out_shape and scale, in that order of priority", where actual_shape is used for dynamic shapes.
My current idea is to change this in ONNXOpMapper, where the Upsample layer maps to the _interpolate function, but I am not sure how to use the actual_shape parameter, or whether there is a more convenient approach.
_interpolate currently provides the scale and out_shape parameters, as follows:

        attr = {
            'scale': scale,
            'out_shape': out_shape,
            'name': string(node.layer_name)
        }
        node.fluid_code.add_layer(fluid_op,
                                  inputs=val_x,
                                  output=node,
                                  param_attr=attr)

google.protobuf.message.DecodeError: Error parsing message

Traceback (most recent call last):
File "/usr/local/bin/x2paddle", line 11, in
load_entry_point('x2paddle==0.6.0', 'console_scripts', 'x2paddle')()
File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.6.0-py3.5.egg/x2paddle/convert.py", line 227, in main
args.caffe_proto, params_merge)
File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.6.0-py3.5.egg/x2paddle/convert.py", line 145, in caffe2paddle
model = CaffeDecoder(proto, weight, caffe_proto)
File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.6.0-py3.5.egg/x2paddle/decoder/caffe_decoder.py", line 228, in init
self.load_using_pb()
File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.6.0-py3.5.egg/x2paddle/decoder/caffe_decoder.py", line 237, in load_using_pb
data.MergeFromString(open(self.model_path, 'rb').read())
google.protobuf.message.DecodeError: Error parsing message

This happens when converting a Caffe model to a PaddlePaddle model. Is there any specific requirement on the google protobuf version? I am using the python3 protobuf package.

AssertionError: The parameter of my_classifier (type is InnerProduct) is not set. You need to use python package of caffe to set the default value.

Converting the resnet50 model (from caffe-model-zoo) fails with:

register layer[ROIPooling]
register layer[PriorBox]
register layer[Permute]
register layer[DetectionOutput]
register layer[Normalize]
register layer[Select]
register layer[ShuffleChannel]
register layer[ConvolutionDepthwise]
register layer[Axpy]
Now translating model from caffe to paddle.
cost: 1.406264305114746
Ignoring parameters for non-existent layer: fc1000
Total nodes: 229
squeeze idx:1, with kind:Convolution,name:conv1
Traceback (most recent call last):
  File "/usr/local/bin/x2paddle", line 9, in <module>
    load_entry_point('x2paddle==0.4.5', 'console_scripts', 'x2paddle')()
  File "/home/***/CNN/X2Paddle/x2paddle/convert.py", line 211, in main
    args.caffe_proto)
  File "/home/***/CNN/X2Paddle/x2paddle/convert.py", line 137, in caffe2paddle
    mapper = CaffeOpMapper(model)
  File "/home/***/CNN/X2Paddle/x2paddle/op_mapper/caffe_op_mapper.py", line 46, in __init__
    func(node)
  File "/home/***/CNN/X2Paddle/x2paddle/op_mapper/caffe_op_mapper.py", line 346, in InnerProduct
    node.layer_name, node.layer_type)
AssertionError: The parameter of my_classifier (type is InnerProduct) is not set. You need to use python package of caffe to set the default value.

The conversion command was:

x2paddle --framework=caffe --prototxt=resnet50.prototxt --weight=resnet50.caffemodel --save_dir=pd_model  --caffe_proto=***/build/include/caffe/proto/caffe_pb2.py

caffe2fluid worked fine before 0.3; why is it no longer compatible after the upgrade?

Can the develop branch support a unidirectional GRU?

I see that Paddle implements a bidirectional GRU like this:

    fc_1 = fluid.layers.fc(input=x,
                           size=rnn_hidden_size * 3,
                           param_attr=para_attr,
                           bias_attr=bias_attr_nobias)
    fc_2 = fluid.layers.fc(input=x,
                           size=rnn_hidden_size * 3,
                           param_attr=para_attr,
                           bias_attr=bias_attr_nobias)
    gru_forward = fluid.layers.dynamic_gru(
        input=fc_1,
        size=rnn_hidden_size,
        param_attr=para_attr,
        bias_attr=bias_attr,
        candidate_activation='relu')
    gru_backward = fluid.layers.dynamic_gru(
        input=fc_2,
        size=rnn_hidden_size,
        is_reverse=True,
        param_attr=para_attr,
        bias_attr=bias_attr,
        candidate_activation='relu')

For the unidirectional case, the PyTorch code is:

model = nn.GRU(10, 20, 2)
input = torch.randn(5, 3, 10)
#h0 = torch.zeros(2, 3, 20)
yp = model(input)
torch.onnx.export(model, input, 'gru.onnx',verbose=True)

The exported ONNX model is here:
https://pan.baidu.com/s/1NtYNN5DSALo-tLrGFDlGxA (extraction code: 897a)

I would like to know whether the develop branch can currently support a unidirectional GRU.

Conversion of the ConstantFill op fails

When converting a GRU with X2Paddle/onnx2fluid, it fails at onnx2fluid/writer.py, line 247, in Op
with ValueError: conversion for ::ConstantFill not supported. Please consider supporting conversion of the ConstantFill op.

PyTorch SSD model exported to ONNX fails to convert to Paddle

Now translating model from onnx to paddle.
model ir_version: 4, op version: 9
Total nodes: 98
Traceback (most recent call last):
File "/home/zt/androidprojects/X2Paddle-develop/x2paddle/core/op_mapper.py", line 122, in save_inference_model
inputs, outputs = model.x2paddle_net()
File "pd_model/model_with_code/model.py", line 7, in x2paddle_net
_107 = fluid.layers.create_parameter(dtype='float32', shape=[], name='_107', attr='_107', default_initializer=Constant(0.0))
File "/home/zt/tools/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/paddle/fluid/layers/tensor.py", line 103, in create_parameter
default_initializer)
File "/home/zt/tools/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/paddle/fluid/layer_helper_base.py", line 330, in create_parameter
**attr._to_kwargs(with_initializer=True))
File "/home/zt/tools/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2384, in create_parameter
param = Parameter(global_block, *args, **kwargs)
File "/home/zt/tools/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/paddle/fluid/framework.py", line 4471, in init
"The dimensions of shape for Parameter must be greater than 0")
ValueError: The dimensions of shape for Parameter must be greater than 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/zt/tools/anaconda3/envs/pytorch1.3/bin/x2paddle", line 11, in
load_entry_point('x2paddle==0.6.0', 'console_scripts', 'x2paddle')()
File "/home/zt/androidprojects/X2Paddle-develop/x2paddle/convert.py", line 233, in main
onnx2paddle(args.model, args.save_dir, params_merge)
File "/home/zt/androidprojects/X2Paddle-develop/x2paddle/convert.py", line 176, in onnx2paddle
mapper.save_inference_model(save_dir, params_merge)
File "/home/zt/androidprojects/X2Paddle-develop/x2paddle/core/op_mapper.py", line 158, in save_inference_model
.format(py_code_dir))
Exception: Paddle code was saved in pd_model/model_with_code/model.py, but seems there's wrong exist, please check model.py manually.

The ONNX file had been processed with onnx-simplifier first. The failure is in the L2Norm part; I am not sure whether it is because the constant at _107 has no scope.

%106 : Float(1, 1024, 19, 19) = onnx::Relu(%105), scope: SSD/VGG16[base]/ReLU # /home/zt/tools/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/torch/nn/functional.py:914:0
%107 : Float() = onnx::Constantvalue={2}
%108 : Float(1, 512, 38, 38) = onnx::Pow(%94, %107), scope: SSD/L2Norm[L2Norm] # /home/zt/PycharmProjects/fssd.pytorch-master/models/SSD_vgg.py:268:0


Question about loading and running inference with a caffe2fluid-converted model

I converted a Caffe model to a Paddle model following https://github.com/PaddlePaddle/X2Paddle/tree/master/caffe2fluid. After the first and second steps, the generated files are shown in the attached screenshot.
The documentation says that loading the model and running inference can follow the official PaddlePaddle docs: https://www.paddlepaddle.org.cn/documentation/docs/zh/1.3/api_guides/low_level/inference.html#id4
But that reference expects an ./infer_model/__model__ file in the model directory, which the Caffe-to-Paddle conversion does not produce. How should this be handled, or is there sample inference code I could refer to?
@SunAhong1993 @jiangjiajun

don't support onnx::equal and onnx::where?

It seems the Equal and Where operators are not supported. The error is:

There are 2 ops not supported yet, list as below
Where
Equal
Traceback (most recent call last):
  File "/root/miniconda3/envs/torch120/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.5.0', 'console_scripts', 'x2paddle')()
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 215, in main
    onnx2paddle(args.model, args.save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/convert.py", line 161, in onnx2paddle
    mapper = ONNXOpMapper(model, save_dir)
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/x2paddle-0.5.0-py3.6.egg/x2paddle/op_mapper/onnx_op_mapper.py", line 76, in __init__
    raise Exception("Model are not supported yet.")
Exception: Model are not supported yet.
