
paddlepaddle / paddle.js

938.0 92.0 131.0 92.84 MB

Paddle.js is the web project of Baidu PaddlePaddle, an open-source deep learning framework that runs in the browser. Paddle.js can either load a pre-trained model or transform a model from paddle-hub with the model transformation tools it provides. It runs in every browser that supports WebGL/WebGPU/WebAssembly, and it can also run in Baidu SmartProgram and WeChat mini programs.

Home Page: https://paddlejs.baidu.com

License: Apache License 2.0

JavaScript 67.27% HTML 2.49% Python 2.52% TypeScript 26.81% Vue 0.87% Shell 0.05%
webgl webgpu webassembly model inference-engine paddlepaddle ocr deep-learning

paddle.js's People

Contributors

andy-zhangtao, benann, changy1105, changying01, dependabot[bot], haozech, hu-qi, jingyuanzhang, jinzhan, leoqh96, raindrops2sea, shadedbluebird, wangqunbaidu, yueshuangyan, zhongkai


paddle.js's Issues

Interactive image segmentation

Hi team, I would like to ask if you have a plan to add an example of interactive object segmentation?
Thanks for your good work.

Problems after converting a PPYOLO pre-trained model to Paddle.js

Hello, after converting either PPYolo or a pre-trained model from PaddleHub to Paddle.js, I run into a 'length' error. How can I resolve it?
Uncaught (in promise) TypeError: Cannot read property 'length' of undefined
    at graph.es6:354
    at Array.filter ()
    at Graph.getNextExecutor (graph.es6:353)
    at graph.es6:225
    at Array.map ()
    at Graph.constructOpsMap (graph.es6:223)
    at Paddle.preGraph (paddle.es6:52)
    at Paddle._callee$ (paddle.es6:41)
    at tryCatch (runtime.js:65)
    at Generator.invoke [as _invoke] (runtime.js:303)


Feature Request: WebAssembly backend support

Hi, I am doing some work with WebAssembly and find that it is widely used in the machine learning field. Other machine learning frameworks such as TVM and TensorFlow already have WebAssembly backend support.
I wonder whether there is any plan for WebAssembly backend support, and what I should do if I want to add WebAssembly support to Paddle.js.

Prediction results never change

I trained a multi-class model (5 classes) with PaddleClas. It runs correctly with the Python backend, but after converting it to the JS version the prediction result is always the same. The data preprocessing looks correct. (Screenshot attached in the original issue.)

Function/eval calls violate strict CSP

The webpack configurations for TS packages emit code like g = g || Function("return this")() || (1,eval)("this");, which cannot run in security-critical contexts with a strict CSP, e.g. in browser extensions.

This can be fixed by adding devtool: "source-map", node: { global: false } to the webpack configuration (see webpack/webpack#5627).
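A minimal sketch of the change described above, assuming a standard webpack.config.js:

    // webpack.config.js -- CSP-friendly settings suggested above
    module.exports = {
        // emit a standalone source map instead of eval-based devtooling
        devtool: 'source-map',
        node: {
            // stop webpack from emitting the Function('return this') global shim
            global: false
        }
    };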

Additionally, the @paddlejs-mediapipe/opencv package is built with Emscripten in a way that includes Function-based code evaluation. Emscripten needs the parameter -s NO_DYNAMIC_EXECUTION=1 to disable the use of Function/eval (see this answer on Stack Overflow).

Front-end model fails to load

src.a2b27638.js:122 Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4
at new Float32Array ()
at src.a2b27638.js:122
at Array.forEach ()
at src.a2b27638.js:122

The error points to line 40 of src/index.js; the front-end model cannot be loaded.

Is OffscreenCanvas in a WebWorker supported?

As the title says, supporting OffscreenCanvas inside a WebWorker would avoid blocking the main UI thread.
A search of the source code turned up no related implementation, so I am raising the suggestion here.
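A minimal sketch of the idea behind this suggestion, using only standard browser APIs (the worker file name and message shape are hypothetical and not part of Paddle.js):

    // main thread: hand the canvas to a worker so heavy work stays off the UI thread
    const canvas = document.createElement('canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('worker.js'); // hypothetical worker file
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // worker.js: the worker now owns the canvas and can draw or read pixels itself
    self.onmessage = (event) => {
        const ctx = event.data.canvas.getContext('2d');
        // ... run preprocessing or rendering against this canvas here ...
    };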

Converting models with multiple outputs (pp tiny_pose) fails: 'dict' object has no attribute 'append'. Fix attached.

Hi,

I hit the error below when using the converter script.

python3 convertToPaddleJSModel.py --modelPath=tinypose_128x96/model.pdmodel --paramPath=tinypose_128x96/model.pdiparams --outputDir=tinypose_128x96/out
Organizing model variables info successfully.
A fetal error occured. Failed to convert model.
Traceback (most recent call last):
  File "/Users/doge/Desktop/learning/pp/Paddle.js/packages/paddlejs-converter/convertModel.py", line 491, in <module>
    convertToPaddleJSModel()
  File "/Users/doge/Desktop/learning/pp/Paddle.js/packages/paddlejs-converter/convertModel.py", line 431, in convertToPaddleJSModel
    appendConnectOp(fetch_targets)
  File "/Users/doge/Desktop/learning/pp/Paddle.js/packages/paddlejs-converter/convertModel.py", line 402, in appendConnectOp
    vars.append(outputVar)
AttributeError: 'dict' object has no attribute 'append'

Quick fix:

in convertModel.py, change line 402

#vars.append(outputVar)
vars[outputVar['name']] = outputVar

and the problem goes away.

Also, should fetal be fatal?

Thanks!

How does paddle.js protect the model and algorithm?

As I understand it, paddle.js runs in the browser, where there is no particularly good way to hide things through encryption or similar measures. In that situation, how can we protect a trained model and its algorithm?

Documentation error

https://github.com/PaddlePaddle/Paddle.js/blob/master/tools/ModelConverter/README_cn.md

In it,

1.2. Quick start

  • If the fluid model to convert uses a merged parameter file, i.e. one model corresponds to one parameter file:
python convertToPaddleJSModel.py --modelPath=<fluid_model_file_path> --paramPath=<fluid_param_file_path> --outputDir=<paddlejs_model_directory>
  • If the fluid model to convert uses sliced parameter files, i.e. one model file corresponds to multiple parameter files:
# Note: when calling the converter this way, the model file in inputDir must be named '__model__'
python convertToPaddleJSModel.py --inputDir=<fluid_model_directory> --outputDir=<paddlejs_model_directory>

should be changed to:
### 1.2. Quick start
- If the fluid model to convert uses `sliced parameter files`, i.e. one model file corresponds to multiple parameter files:
``` bash
python convertToPaddleJSModel.py --modelPath=<fluid_model_file_path> --paramPath=<fluid_param_file_path> --outputDir=<paddlejs_model_directory>
```
- If the fluid model to convert uses a `merged parameter file`, i.e. one model corresponds to one parameter file:
``` bash
# Note: when calling the converter this way, the model file in inputDir must be named '__model__'
python convertToPaddleJSModel.py --inputDir=<fluid_model_directory> --outputDir=<paddlejs_model_directory>
```

Tested on AI Studio:
ll output/mobilenetv3_large_ssld_Infer
total 17184
drwxr-xr-x 2 aistudio aistudio     4096 Jun 25 11:02 ./
drwxr-xr-x 6 aistudio aistudio     4096 Jun 25 14:00 ../
-rw-r--r-- 1 aistudio aistudio   667718 Jun 25 11:04 __model__
-rw-r--r-- 1 aistudio aistudio      622 Jun 25 11:04 model.yml
-rw-r--r-- 1 aistudio aistudio 16911818 Jun 25 11:04 __params__
-rw-r--r-- 1 aistudio aistudio        0 Jun 25 11:04 .success

`! python convertToPaddleJSModel.py --modelPath=/home/aistudio/output/mobilenetv3_large_ssld_Infer/__model__ --paramPath=/home/aistudio/output/mobilenetv3_large_ssld_Infer/__params__ --outputDir=/home/aistudio/output/mobilenetv3_large_ssld_paddlejs`

============Convert Model Args=============
modelPath: /home/aistudio/output/mobilenetv3_large_ssld_Infer/__model__
paramPath: /home/aistudio/output/mobilenetv3_large_ssld_Infer/__params__
outputDir: /home/aistudio/output/mobilenetv3_large_ssld_paddlejs
enableOptimizeModel: 0
enableLogModelInfo: 0
sliceDataSize:4096
Starting...
You choosed not to optimize model, consequently, optimizing model is skiped.
Converting model...
Organizing model operators info...
Organizing model operators info successfully.
Organizing model variables info...
Organizing model variables info successfully.
Dumping model structure to json file...
Dumping model structure to json file successfully
Output No.1 binary file, remain 3177624 param values.
Output No.2 binary file, remain 2129048 param values.
Output No.3 binary file, remain 1080472 param values.
Output No.4 binary file, remain 31896 param values.
Output No.5 binary file, remain 0 param values.
Slicing data to binary files successfully. (5 output files and 4226200 param values)
Converting model successfully.
============ALL DONE============
$ ll output/mobilenetv3_large_ssld_paddlejs
total 16868
drwxr-xr-x 2 aistudio aistudio    4096 Jun 25 14:00 ./
drwxr-xr-x 6 aistudio aistudio    4096 Jun 25 14:00 ../
-rw-r--r-- 1 aistudio aistudio 4194304 Jun 25 14:00 chunk_1.dat
-rw-r--r-- 1 aistudio aistudio 4194304 Jun 25 14:00 chunk_2.dat
-rw-r--r-- 1 aistudio aistudio 4194304 Jun 25 14:00 chunk_3.dat
-rw-r--r-- 1 aistudio aistudio 4194304 Jun 25 14:00 chunk_4.dat
-rw-r--r-- 1 aistudio aistudio  127584 Jun 25 14:00 chunk_5.dat
-rw-r--r-- 1 aistudio aistudio  355062 Jun 25 14:00 model.json

But I could not figure out how to use python convertToPaddleJSModel.py --inputDir=<fluid_model_directory> --outputDir=<paddlejs_model_directory>.
Please advise, thank you!

The following problem appears after importing the converted model:

Uncaught (in promise) TypeError: Failed to execute 'getAttribLocation' on 'WebGL2RenderingContext': parameter 1 is not of type 'WebGLProgram'.
at t.runVertexShader (index.js:1)
at t.setProgram (index.js:1)
at index.js:1
at Array.forEach ()
at e.runProgram (index.js:1)
at t.execute (index.js:1)
at t.executeOp (index.js:1)
at t.executeOp (index.js:1)
at t. (index.js:1)
at index.js:1

Model conversion fails with v2.2.0

As the title says, when I use the v2.2.0 model conversion script convertToPaddleJSModel.py to convert the publicly downloadable OCR recognition model ch_PP-OCRv2_rec_infer, the following error appears:

A fetal error occured. Failed to convert model.
Traceback (most recent call last):
  File "convertModel.py", line 541, in main
    convertToPaddleJSModel(modelDir, modelName, paramsName, outputDir)
  File "convertModel.py", line 465, in convertToPaddleJSModel
    rnn.splice_rnn_op(modelInfo, index)
  File "/Users/xiaomingwang/PaddleSpace/demo/Paddle.js-release-v2.2.0/packages/paddlejs-converter/rnn.py", line 130, in splice_rnn_op
    input_shape = vars[rnn_input_name]['shape']
KeyError: 'lstm_0_33.tmp_concat'

The problem does not occur when converting with the v2.1.0 script, but the model produced by that version throws a runtime error (related to the rnn op), and the model.json it outputs differs in structure from the remote recognition model used by the v2.2.0 OCR example project.

How was the recognition model currently used in the v2.2.0 OCR example project converted?

How can the conversion problem for the ch_PP-OCRv2_rec_infer model be resolved?

After conversion, the softmax of a fine-tuned MobileNet becomes 1000 classes

A MobileNet model fine-tuned for my own task produces my own 20 classes when run in Python.
But in the model.json generated by the conversion tool, the softmax output corresponds to ImageNet's 1000 classes, and running it in JS also yields 1000 classes:
{
    "name": "softmax_0.tmp_0",
    "persistable": false,
    "shape": [
        1000
    ]
}
What is going on here?
The model was fine-tuned with PaddleClas on AI Studio, and the prediction results after training were correct.

Error when importing into a WeChat mini program

Importing into a WeChat mini program throws Cannot read property 'userAgent' of undefined; after commenting that out, other methods throw errors as well. (Screenshot attached in the original issue.)

Mini program reports Cannot read property 'params' of undefined in buildOp

Platform: WeChat mini program
Version: [email protected]
Problem: when execution reaches flatten_contiguous_range, a Cannot read property 'params' of undefined error is thrown. Is the Flatten operator not supported? And is the MobileNetV1 network supported? Thanks.
Code snippet where the error occurs:
key: "buildOp",
value: function value(t) {
    console.log(t);
    var e = this,
        n = N.ops[t].params,
        r = N.atoms;
    return (N.ops[t].confs.dep || []).map(function (t) {
        var o = t.func,
            i = t.conf,
            a = r[o];
        n += e.populateData(a, i);
    }), n += this.buildSuffix(t), n += N.ops[t].func;
}
},
Error message:
VM103 WAService.js:2 TypeError: Cannot read property 'params' of undefined
at t.value (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:12752)
at t.value (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:12729)
at vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:14209
at Array.map ()
at t.value (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:14208)
at vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:14574
at Array.forEach ()
at t.value (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:14571)
at t. (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:15252)
at u (vendor.js?t=wechat&s=1613963073045&v=194de92480d55cfc8fe810cb831eba14:15740)

Deploying example/gesture: Cannot read property 'RGBA' of null

The front-end exception is as follows:

Uncaught (in promise) TypeError: Cannot read property 'RGBA' of null
    at new t (index.js:1)
    at new t (index.js:1)
    at new t (index.js:1)
    at t.value (index.js:1)
    at t.<anonymous> (index.js:1)
    at u (index.js:1)
    at Generator._invoke (index.js:1)
    at Generator.forEach.t.(:8888/anonymous function) [as next] (https://localhost:8888/gesture.77a0a3b8.js:122:4238)
    at n (index.js:1)
    at s (index.js:1)

The error was traced to the code location shown in the screenshot (attached in the original issue).

Digging further, I found:

//gl = canvas.getContext('webgl2') || canvas.getContext('experimental-webgl2');
    gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

Replacing it with webgl2 does not solve the problem either.
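A minimal sketch of how one can verify, independently of the engine, whether the browser will hand out any WebGL context at all (standard browser APIs only; the function name is illustrative):

    // Returns the first WebGL context the browser provides, or null if none.
    function probeWebGL(canvas) {
        const gl = canvas.getContext('webgl2')
            || canvas.getContext('webgl')
            || canvas.getContext('experimental-webgl');
        if (!gl) {
            console.warn('No WebGL context available; gl.RGBA cannot be read from null.');
        }
        return gl;
    }

    probeWebGL(document.createElement('canvas'));

If this returns null, the 'RGBA' of null error is expected, since the engine ends up reading properties off a missing context.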

How to change the backend of the humanStream example

Hello, I ran into the following error when running humanStream:

Uncaught (in promise) TypeError: Failed to execute 'getAttribLocation' on 'WebGL2RenderingContext': parameter 1 is not of type 'WebGLProgram'.

My machine has no GPU. Does this require changing the backend, and if so, how do I change it?
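As a first diagnostic step, one can ask the browser what is actually backing WebGL2; on GPU-less machines this is often a software renderer or nothing at all. A minimal sketch using standard APIs (the debug extension may be unavailable in some browsers):

    const gl = document.createElement('canvas').getContext('webgl2');
    if (!gl) {
        console.log('WebGL2 is not available; a non-GPU backend would be needed.');
    } else {
        const info = gl.getExtension('WEBGL_debug_renderer_info');
        const renderer = info
            ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
            : gl.getParameter(gl.RENDERER);
        console.log('WebGL2 renderer:', renderer); // often a SwiftShader/software string without a GPU
    }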

Errors when starting the samples in examples

When running the humanseg and humanStream samples, both fail with Cannot resolve dependency for the statement import cv from '../../opencv.js';. Is this a mistake in my deployment steps, or is the corresponding opencv.js file simply missing?

Error in the paddlejs sample code

The \examples\videoDemo project throws the following error when run:
Uncaught (in promise) TypeError: Cannot read property '0' of undefined
at Runner._callee5$ (runner.es6:144)
at tryCatch (runtime.js:65)
at Generator.invoke [as _invoke] (runtime.js:303)
at Generator.prototype. [as next] (runtime.js:117)
at asyncGeneratorStep (runner.es6:17)
at _next (runner.es6:17)
The model in the project loads successfully, but this.modelConfig.fetchShape is empty. I have not been able to locate the source of the problem; please help take a look.

Problem testing the tiny YOLO model

Following the instructions in the example at https://github.com/PaddlePaddle/Paddle.js/tree/master/src/paddle, I started the tinyYolo service and got this error:

Server running at http://localhost:1234
🚨 /home/aistudio/Paddle.js/examples/tinyYolo/index.es6:4:15: Cannot resolve dependency '../../src/feed/imageFeed' at '/home/aistudio/Paddle.js/src/feed/imageFeed'
2 | import 'babel-polyfill';
3 | import Paddle from '../../src/paddle/paddle';

4 | import IO from '../../src/feed/imageFeed';

After renaming the imageFeed.es6 file in the ~/Paddle.js/src/feed directory (on aistudio@jupyter-141218-528471) so that it no longer carries the extension, the error went away.

However, in both Chrome and Safari I only see the photo upload button, and after uploading a photo there is no output at all. I do not know why.

Mini program error: TypeError: Cannot read property 'userAgent' of undefined

paddle npm version:
"paddlejs": "^1.0.17"

paddle WeChat mini program plugin version:

"paddlejs-plugin": {
      "version": "0.0.3",
      "provider": "wx7138a7bb793608c3"
    }

Code reference:
https://blog.csdn.net/weixin_45449540/article/details/108114307

Error details:
At first, app.js threw an error; after turning off the ES6-to-ES5 conversion option, that error disappeared.

Then, opening the relevant feature page reports:
Cannot read property 'userAgent' of undefined

Alternatively, could you provide a runnable example? Searching for mini program examples only turns up the one linked above.
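The error suggests that the global navigator object does not exist in the mini program runtime. A minimal sketch of the kind of guard that avoids the crash (illustrative only; this is not a description of how Paddle.js itself is structured):

    // Fall back to an empty UA string when navigator is missing,
    // as it is inside WeChat mini program environments.
    const userAgent = (typeof navigator !== 'undefined' && navigator.userAgent) || '';
    const isIOS = /iPhone|iPad|iPod/i.test(userAgent);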

Model files cannot be accessed after moving them to cloud storage

async function load() {
    const path = 'https://paddlejs.cdn.bcebos.com/models/mobileNetV2';
    await mobilenet.load({
        path,
        fileCount: 3,
        mean: [0.485, 0.456, 0.406],
        std: [0.229, 0.224, 0.225]
    }, map);
    document.getElementById('loading')!.style.display = 'none';
}

The model files in the official demo are stored at the path above: https://paddlejs.cdn.bcebos.com/models/mobileNetV2
After converting a model myself, I uploaded it to Baidu Cloud's BOS object storage at: https://paddlejs2.bj.bcebos.com/tianjiao_mobilenetv2
Why can't it be accessed after I swap in that address? (The model works fine when tested locally, so it should be a problem with loading from the model address. Could someone please help take a look?)
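A minimal sketch for checking whether the hosted files are actually reachable from the page (standard fetch API; model.json is assumed to be the file produced by the converter, as in the logs elsewhere in this page):

    // Logs the HTTP status and the CORS header for the hosted model description.
    async function checkModelUrl(base) {
        const response = await fetch(base + '/model.json');
        console.log('status:', response.status);
        console.log('access-control-allow-origin:', response.headers.get('access-control-allow-origin'));
        return response.ok;
    }

    checkModelUrl('https://paddlejs2.bj.bcebos.com/tianjiao_mobilenetv2');

A non-200 status or a missing access-control-allow-origin header would explain why the same code works with the official CDN path but not with the new bucket.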

The Paddle.js yolo example is missing an image file

Error after starting the server:

(base) ubuntu@ip-172-31-23-98:~/github/Paddle.js/examples/yolo$ npm run yolo

[email protected] yolo /home/ubuntu/github/Paddle.js
parcel ./examples/yolo/index.html

Server running at http://localhost:1234
🚨 /home/ubuntu/github/Paddle.js/examples/yolo/pic.png: ENOENT: no such file or directory, open '/home/ubuntu/github/Paddle.js/examples/yolo/pic.png'
Error: ENOENT: no such file or directory, open '/home/ubuntu/github/Paddle.js/examples/yolo/pic.png'

Opening xxxx:1234 in the browser also shows the error message:

/home/ubuntu/github/Paddle.js/examples/yolo/pic.png: ENOENT: no such file or directory, open '/home/ubuntu/github/Paddle.js/examples/yolo/pic.png'

continuous performance degradation when running tiny_pose

Hi again,

I've successfully converted pp tiny_pose (256 & 128 versions) to paddle.js env, & all works out smoothly.

However, after a while (10~20s) the predict function becomes slower & slower, from (initially) 50ms to 200ms & so on.

To reproduce the effect, please check out the archive below (replace the mobilenet example folder with it and run webpack).

(Run http-server -o and go to 127.0.0.1:8080, as this requests your camera and needs to run on localhost to avoid the https policy.)

tiny_pose.zip

The program runs in Chrome on a 2019 MacBook Pro with a discrete GPU, using the paddle.js WebGL backend.

Although this issue may ultimately be closely tied to the converted model, I still think it would be helpful for the community to get some insight into how to deal with this during the model adaptation process.

Thanks!
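A minimal sketch of how the slowdown can be measured so that reports contain comparable numbers (standard performance.now timing; runner.predict stands in for whatever async predict call the example uses):

    // Logs the latency of every 10th prediction so a gradual slowdown is visible.
    async function timedLoop(runner, input, iterations = 200) {
        for (let i = 0; i < iterations; i++) {
            const start = performance.now();
            await runner.predict(input);
            const elapsed = performance.now() - start;
            if (i % 10 === 0) {
                console.log('iteration ' + i + ': ' + elapsed.toFixed(1) + ' ms');
            }
        }
    }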

Request for a WeChat mini program demo project

I have managed to get single-image inference working, but the process was fairly painful. Could the developers publish a demo repo dedicated to mini program deployment?
Also, when will there be a video-stream solution for the mini program side? The work I want to do is mostly video-oriented.

OCR demo crashes after repeated calls

As the title says, with the 2.2.0 OCR demo, a runtime error appears after selecting images for recognition several times. The error is shown in the screenshot attached to the original issue.

The image is 450*1000 px and about 60 KB; the crash appears after roughly 17 calls.

Preliminary debugging shows it happens while OpenCV is operating on Mat data; the exact cause is unclear.
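If the crash is memory-related, one thing worth checking is whether every opencv.js Mat created per call is explicitly freed; opencv.js (the Emscripten build of OpenCV) does not garbage-collect Mats. A minimal sketch, assuming cv is the opencv.js module used by the demo:

    // opencv.js Mats live in WASM heap memory and must be released manually;
    // forgetting delete() in a per-image code path leaks memory on every call.
    const src = cv.imread(imageElement);   // imageElement: an <img> or <canvas>
    const gray = new cv.Mat();
    cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
    // ... use gray ...
    src.delete();
    gray.delete();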

Errors when a mini program calls a model converted for JS

A model produced by the v2.1 conversion tool throws TypeError: Cannot read property 'length' of undefined when loading.
A model produced by the v2.0 conversion tool throws Unhandled promise rejection TypeError: Reduce of empty array with no initial value when loading.
The first problem was already raised in an issue back in February this year. Has there been any progress, and how should I resolve it?

Unable to change the model output size

As the title says, based on the ocr_det demo, the model is initialized with the following parameters:

    detectRunner = new Runner({
        modelPath: './models',
        feedShape: {
            fw: 960,
            fh: 960
        },
        fill: '#fff',
        mean: [0.485, 0.456, 0.406],
        std: [0.229, 0.224, 0.225],
        bgr: true,
    });
    await detectRunner.init();

But at inference time the model output size is 409600, i.e. 640*640. The inference code:

    ctx.drawImage(image, x, y, sw, sh);
    const outsDict = await detectRunner.predict(canvas);

Here image is the image to be converted, ctx comes from the canvas, and x, y, sw, sh are the transformed dimensions. outsDict always has length 409600, which is not what I expect.

Importing paddlejs-backend-cpu reports a module-not-found error

Hello, while developing a web application with Vue.js, I tried to use the CPU version of Paddle.js and found that the import throws an error (screenshot in the original issue).

After some testing I found that slightly changing the import fixes it (appending /lib to the path), as shown in the second screenshot.

I suspect the import/entry path in the npm package has a bug? Thanks for taking a look.
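A minimal sketch of the workaround described above (the package specifier is taken from the issue title; the exact published name and entry point are not verified here):

    // Fails to resolve in this Vue.js setup:
    // import * as cpuBackend from 'paddlejs-backend-cpu';

    // Resolves after pointing at the built output directly, per the workaround above:
    import * as cpuBackend from 'paddlejs-backend-cpu/lib';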

HTMLVideoElement is not defined

My code is identical to the open-source version, but on the same phone the original recognizes correctly while mine throws this error; I do not know why. (Screenshot attached in the original issue.)

Loading error after converting a PaddleX model

The test uses the PaddleX fruit-and-vegetable dataset to train a MobileNetV2 model; after exporting it locally, prediction with Python works fine.
The result of converting with the convertToPaddleJSModel.py tool is as follows:
(paddle) D:\paddle\PaddleX\ModelConverter>convertToPaddleJSModel.py --modelPath=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__model__ --paramPath=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__params__ --outputDir=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\js_model
============Convert Model Args=============
modelPath: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__model__
paramPath: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__params__
outputDir: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\js_model
enableOptimizeModel: 0
enableLogModelInfo: 0
sliceDataSize:4096
Starting...
You choosed not to optimize model, consequently, optimizing model is skiped.
Converting model...
Organizing model operators info...
Organizing model operators info successfully.
Organizing model variables info...
Organizing model variables info successfully.
Model chunkNum set successfully.
Dumping model structure to json file...
Dumping model structure to json file successfully
Output No.1 binary file, remain 1218246 param values.
Output No.2 binary file, remain 169670 param values.
Output No.3 binary file, remain 0 param values.
Slicing data to binary files successfully. (3 output files and 2266822 param values)
Converting model successfully.
============ALL DONE============
When testing the model with paddle.js, the following error appears (screenshot in the original issue).

I then tried convertToPaddleJSModel.py again with the extra parameter --optimize=1 to optimize the model (paddlelite==2.6.1):
(paddle) D:\paddle\PaddleX\ModelConverter>convertToPaddleJSModel.py --modelPath=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__model__ --paramPath=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__params__ --outputDir=D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\js_model --optimize=1
============Convert Model Args=============
modelPath: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__model__
paramPath: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\inference_model\__params__
outputDir: D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\js_model
enableOptimizeModel: 1
enableLogModelInfo: 0
sliceDataSize:4096
Starting...
Optimizing model...
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0802 18:42:33.895296 10456 cxx_api.cc:251] Load model from file.
I0802 18:42:33.941295 10456 optimizer.h:202] == Running pass: lite_conv_elementwise_fuse_pass
I0802 18:42:33.948297 10456 optimizer.h:219] == Finished running: lite_conv_elementwise_fuse_pass
I0802 18:42:33.948297 10456 optimizer.h:202] == Running pass: lite_conv_bn_fuse_pass
I0802 18:42:34.001297 10456 pattern_matcher.cc:108] detected 36 subgraph
I0802 18:42:34.014297 10456 pattern_matcher.cc:108] detected 17 subgraph
I0802 18:42:34.015297 10456 optimizer.h:219] == Finished running: lite_conv_bn_fuse_pass
I0802 18:42:34.015297 10456 optimizer.h:202] == Running pass: lite_conv_elementwise_fuse_pass
I0802 18:42:34.024296 10456 optimizer.h:219] == Finished running: lite_conv_elementwise_fuse_pass
I0802 18:42:34.024296 10456 optimizer.h:202] == Running pass: lite_conv_activation_fuse_pass
I0802 18:42:34.034297 10456 pattern_matcher.cc:108] detected 19 subgraph
I0802 18:42:34.046296 10456 pattern_matcher.cc:108] detected 17 subgraph
I0802 18:42:34.053342 10456 optimizer.h:219] == Finished running: lite_conv_activation_fuse_pass
I0802 18:42:34.054306 10456 optimizer.h:202] == Running pass: lite_var_conv_2d_activation_fuse_pass
I0802 18:42:34.054306 10456 optimizer.h:215] - Skip lite_var_conv_2d_activation_fuse_pass because the target or kernel does not match.
I0802 18:42:34.055296 10456 optimizer.h:202] == Running pass: lite_fc_fuse_pass
I0802 18:42:34.055296 10456 pattern_matcher.cc:108] detected 1 subgraph
I0802 18:42:34.056308 10456 optimizer.h:219] == Finished running: lite_fc_fuse_pass
I0802 18:42:34.056308 10456 optimizer.h:202] == Running pass: lite_shuffle_channel_fuse_pass
I0802 18:42:34.056308 10456 optimizer.h:219] == Finished running: lite_shuffle_channel_fuse_pass
I0802 18:42:34.056308 10456 optimizer.h:202] == Running pass: lite_transpose_softmax_transpose_fuse_pass
I0802 18:42:34.057299 10456 optimizer.h:219] == Finished running: lite_transpose_softmax_transpose_fuse_pass
I0802 18:42:34.058316 10456 optimizer.h:202] == Running pass: lite_interpolate_fuse_pass
I0802 18:42:34.058316 10456 optimizer.h:219] == Finished running: lite_interpolate_fuse_pass
I0802 18:42:34.058316 10456 optimizer.h:202] == Running pass: identity_scale_eliminate_pass
I0802 18:42:34.059303 10456 pattern_matcher.cc:108] detected 1 subgraph
I0802 18:42:34.059303 10456 optimizer.h:219] == Finished running: identity_scale_eliminate_pass
I0802 18:42:34.059303 10456 optimizer.h:202] == Running pass: elementwise_mul_constant_eliminate_pass
I0802 18:42:34.059303 10456 optimizer.h:219] == Finished running: elementwise_mul_constant_eliminate_pass
I0802 18:42:34.060302 10456 optimizer.h:202] == Running pass: lite_sequence_pool_concat_fuse_pass
I0802 18:42:34.060302 10456 optimizer.h:215] - Skip lite_sequence_pool_concat_fuse_pass because the target or kernel does not match.
I0802 18:42:34.060302 10456 optimizer.h:202] == Running pass: lite_elementwise_add_activation_fuse_pass
I0802 18:42:34.060302 10456 optimizer.h:219] == Finished running: lite_elementwise_add_activation_fuse_pass
I0802 18:42:34.061324 10456 optimizer.h:202] == Running pass: static_kernel_pick_pass
I0802 18:42:34.063328 10456 optimizer.h:219] == Finished running: static_kernel_pick_pass
I0802 18:42:34.063328 10456 optimizer.h:202] == Running pass: variable_place_inference_pass
I0802 18:42:34.064328 10456 optimizer.h:219] == Finished running: variable_place_inference_pass
I0802 18:42:34.064328 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.064328 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.064328 10456 optimizer.h:202] == Running pass: type_target_cast_pass
I0802 18:42:34.065297 10456 optimizer.h:219] == Finished running: type_target_cast_pass
I0802 18:42:34.065297 10456 optimizer.h:202] == Running pass: variable_place_inference_pass
I0802 18:42:34.066332 10456 optimizer.h:219] == Finished running: variable_place_inference_pass
I0802 18:42:34.066332 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.066332 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.066332 10456 optimizer.h:202] == Running pass: io_copy_kernel_pick_pass
I0802 18:42:34.067297 10456 optimizer.h:219] == Finished running: io_copy_kernel_pick_pass
I0802 18:42:34.069296 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.070310 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.070310 10456 optimizer.h:202] == Running pass: variable_place_inference_pass
I0802 18:42:34.071297 10456 optimizer.h:219] == Finished running: variable_place_inference_pass
I0802 18:42:34.071297 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.071297 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.071297 10456 optimizer.h:202] == Running pass: type_precision_cast_pass
I0802 18:42:34.072297 10456 optimizer.h:219] == Finished running: type_precision_cast_pass
I0802 18:42:34.072297 10456 optimizer.h:202] == Running pass: variable_place_inference_pass
I0802 18:42:34.073297 10456 optimizer.h:219] == Finished running: variable_place_inference_pass
I0802 18:42:34.073297 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.073297 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.073297 10456 optimizer.h:202] == Running pass: type_layout_cast_pass
I0802 18:42:34.074297 10456 optimizer.h:219] == Finished running: type_layout_cast_pass
I0802 18:42:34.074297 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.074297 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.074297 10456 optimizer.h:202] == Running pass: variable_place_inference_pass
I0802 18:42:34.075297 10456 optimizer.h:219] == Finished running: variable_place_inference_pass
I0802 18:42:34.075297 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.075297 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.075297 10456 optimizer.h:202] == Running pass: runtime_context_assign_pass
I0802 18:42:34.075297 10456 optimizer.h:219] == Finished running: runtime_context_assign_pass
I0802 18:42:34.075297 10456 optimizer.h:202] == Running pass: argument_type_display_pass
I0802 18:42:34.075297 10456 optimizer.h:219] == Finished running: argument_type_display_pass
I0802 18:42:34.076297 10456 generate_program_pass.h:37] insts.size 68
Save the optimized model into :D:\paddle\PaddleX\WorkSpace\P0001-T0001_export_model\js_model\optimizesuccessfully
Optimizing model successfully.
Converting model...
Organizing model operators info...
Organizing model operators info successfully.
Organizing model variables info...
Organizing model variables info successfully.
Model chunkNum set successfully.
Dumping model structure to json file...
Dumping model structure to json file successfully
Output No.1 binary file, remain 1166982 param values.
Output No.2 binary file, remain 118406 param values.
Output No.3 binary file, remain 0 param values.
Slicing data to binary files successfully. (3 output files and 2215558 param values)
Converting model successfully.
Cleaning optimized temporary model...
Temporary files has been deleted successfully.
============ALL DONE============
When testing the model with paddle.js, the following error appears (screenshot in the original issue).
