xboot / libonnx

A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.

License: MIT License

Makefile 0.29% C 98.94% Python 0.77%
onnx inference c embedded baremetal library deep-learning embedded-systems portable lightweight deep-neural-networks neural-network machine-learning hardware-acceleration ai

libonnx's Introduction


Libonnx

A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.

Getting Started

The library's .c and .h files can be dropped into a project and compiled along with it. Before use, allocate a struct onnx_context_t *; you can also pass an array of struct resolver_t * for hardware acceleration.

The filename argument is the path to an ONNX model file.

struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, NULL, 0);

Then, you can get the input and output tensors using the onnx_tensor_search function.

struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");

Once the input tensor has been set, you can run the inference engine using the onnx_run function, and the result will be put into the output tensor.

onnx_run(ctx);

Finally, you must free the struct onnx_context_t * using the onnx_context_free function.

onnx_context_free(ctx);
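Putting the steps above together, a minimal end-to-end sketch might look like this (the model path and tensor names are placeholders; how the input buffer is filled depends on your model):

```c
#include <onnx.h>

int main(void)
{
	/* NULL, 0: no custom resolvers, use the default CPU implementations */
	struct onnx_context_t * ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
	if(!ctx)
		return 1;

	/* "input" / "output" are placeholders; use your model's tensor names */
	struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input");
	struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output");

	/* ... fill the input tensor's data buffer here ... */

	onnx_run(ctx);

	/* ... read the results from the output tensor here ... */

	onnx_context_free(ctx);
	return 0;
}
```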

Compilation Instructions

Just type make in the root directory; this produces a static library and some example and test binaries that demonstrate usage.

cd libonnx
make

To compile the mnist example, you will have to install SDL2 and SDL2_gfx. On systems like Ubuntu, run

    apt-get install libsdl2-dev libsdl2-gfx-dev

to install the required Simple DirectMedia Layer libraries to run the GUI.

Cross compilation example (for arm64)

Run make CROSS_COMPILE=path/to/toolchains/aarch64-linux-gnu- at the root directory to compile all libraries, tests and examples for the platform.

Change CROSS_COMPILE to point to the toolchain that you plan to use.

How to run examples

After compiling all the files, you can run an example by using:

cd libonnx/examples/hello
./hello


Running tests

To run the tests, for example those in the tests/model folder, use:

cd libonnx/tests/
./tests model

Here is the output:

[mnist_8](test_data_set_0)                                                              [OKAY]
[mnist_8](test_data_set_1)                                                              [OKAY]
[mnist_8](test_data_set_2)                                                              [OKAY]
[mobilenet_v2_7](test_data_set_0)                                                       [OKAY]
[mobilenet_v2_7](test_data_set_1)                                                       [OKAY]
[mobilenet_v2_7](test_data_set_2)                                                       [OKAY]
[shufflenet_v1_9](test_data_set_0)                                                      [OKAY]
[shufflenet_v1_9](test_data_set_1)                                                      [OKAY]
[shufflenet_v1_9](test_data_set_2)                                                      [OKAY]
[squeezenet_v11_7](test_data_set_0)                                                     [OKAY]
[squeezenet_v11_7](test_data_set_1)                                                     [OKAY]
[squeezenet_v11_7](test_data_set_2)                                                     [OKAY]
[super_resolution_10](test_data_set_0)                                                  [OKAY]
[tinyyolo_v2_8](test_data_set_0)                                                        [OKAY]
[tinyyolo_v2_8](test_data_set_1)                                                        [OKAY]
[tinyyolo_v2_8](test_data_set_2)                                                        [OKAY]

Note that running the tests on the other folders may not succeed: some operators have not been implemented. Look at the Notes section for more info.

Notes

  • This library is based on onnx version 1.9.1, with support for the newest opset 14. The supported operator table is in the documents directory.
  • Check out the tools folder for help with ONNX model files.
  • You can use xxd -i <filename.onnx> (on Linux) to convert your onnx model into an unsigned char array and then load it with the onnx_context_alloc function. This is how the models are loaded in the hello and mnist examples.
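For example, using a tiny stand-in file here rather than a real model:

```shell
# Create a stand-in file (in practice this would be your .onnx model)
printf 'AB' > model.onnx
# Emit a C array; the symbol name is derived from the filename
xxd -i model.onnx
# -> unsigned char model_onnx[] = {
#      0x41, 0x42
#    };
#    unsigned int model_onnx_len = 2;
```

Redirect the output into a header, include it, and pass the array and its length to onnx_context_alloc.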


License

This library is free software; you can redistribute it and/or modify it under the terms of the MIT license. See MIT License for details.

libonnx's People

Contributors

beru, jianjunjiang, reinforce-ii


libonnx's Issues

Which header files are necessary?

Hello!
I am working on a project where I have to deploy an inference engine in a very minimal environment where even the math library is not present. While compiling onnxconf.h for the platform, I found that the math.h header was missing from the platform libraries. Here are the headers used in onnxconf.h:

#include <stdio.h>
#include <stdlib.h>  
#include <stdint.h>  
#include <stddef.h>  
#include <string.h>  
#include <malloc.h>  
#include <float.h>  
#include <math.h>  
#include <list.h>  
#include <hmap.h>  

Also, I have found that no math function is used in this header file; I hope I am right on this! Correct me if I am wrong.
So I was wondering if I could just remove the inclusion of math.h and still have a functional inference engine?

Just to keep things simple, compiling a math library would not be desirable, but it is possible.

Support GRU?

I find that GRU.c is nearly empty. libonnx doesn't support GRU right now, does it?

Implementation of Upsample function

Hi,

In the src/default folder, we can find the C implementations of most operators (e.g. Conv, MatMul, etc.); however, we could not find the C implementation of Upsample.

Where is the implementation C code of Upsample?

Hello model RAM size required

Hi,

I'm trying to run the hello example on a small embedded system, but I'm unsure of the memory required to allocate this model (when running onnx_context_alloc).

I have roughly 2 MB; is that enough?
Is there a smaller model that I can test with, with the model defined as a const char array,
like static const unsigned char mnist_onnx[] = { ... }?

Maxpool + dilation

This is really a question, I don't think there is a bug here, just something I'm not understanding.

I'm looking at the code for maxpool and how it handles dilations. The spec has this example:

"""
input_shape: [1, 1, 4, 4]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
    'MaxPool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
    strides=[1, 1],
    dilations=[2, 2]
)
x = np.array([[[
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]]]).astype(np.float32)
y = np.array([[[
    [11, 12],
    [15, 16]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_maxpool_2d_dilations')

This should implicitly use AUTO_PAD_NOTSET. What I tried is getting MaxPool_float32 to give the [ 11, 12, 15, 16 ] result by hardcoding the inputs; for the full code + output see this godbolt:

int strides[] = { 1, 1 };
int kernels[] = { 2, 2 };
int cpads[] = { 0, 0, 0, 0 };

int x_ndim = 4;
int x_dims[] = { 1, 1, 4, 4 };
int y_dims[] = { 1, 1, 2, 2 };

From my code reading, the dilation is only used to determine the output dimensions, which I've hardcoded here.

But with these inputs I get the incorrect output:

6.000000 7.000000 10.000000 11.000000

So, how do dilations influence the end result in a way that I am missing?

How to convert an onnx model to an `unsigned char array` as in the hello world example?

In the main.c file in the examples/hello folder, how did you convert the MNIST model to an unsigned char array and use it?:

#include <onnx.h>

static const unsigned char mnist_onnx[] = {
	0x08, 0x03, 0x12, 0x04, 0x43, 0x4e, 0x54, 0x4b, 0x1a, 0x05, 0x32, 0x2e,
	0x35, 0x2e, 0x31, 0x22, 0x07, 0x61, 0x69, 0x2e, 0x63, 0x6e, 0x74, 0x6b,
	0x28, 0x01, 0x3a, 0xb2, 0xce, 0x01, 0x0a, 0x62, 0x0a, 0x0c, 0x50, 0x61,
	0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x31, 0x39, 0x33, 0x0a, 0x1b,
	0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x31, 0x39, 0x33,

If you have any code to do it, can you share it?

Python bindings

It'd be nice to have Python bindings, since onnxruntime by Microsoft has telemetry, so depending on it is a bit unethical. Fortunately, there can be a thin abstraction layer.

To maximize compatibility with various Python implementations, and to spare users from compiling the lib themselves, it may make sense to implement it via ctypes. There are packages that generate ctypes bindings from headers automatically, but they usually need extensive postprocessing.

Why does every test in the model, node and simple folders fail?

I have compiled libonnx on a fresh installation of Ubuntu, installed all the prerequisites, ran just make, and tried running the tests one by one. But I find that every test I run fails, and I am not able to figure out why.

Here is what I did:

  • Installed the latest LTS release of Ubuntu
  • Installed make, build-essential, git and libsdl2-gfx (Did do other stuff but those would not mess with this)
  • Ran make all to compile
  • Ran the tests on many examples: ./tests ./model/mnist_8/
  • But every test that I have tried has just failed!
$ ./TESTING/libonnx/tests/tests ./TESTING/libonnx/tests/model/mnist_8/
[test_data_set_0]                                                                       [FAIL]
[test_data_set_1]                                                                       [FAIL]
[test_data_set_2]                                                                       [FAIL]
  • All those in the simple and model folders fail but those in pytorch-* succeed partially
  • Is this because of a missing operator? (no Unsupported opset message has been displayed as in the pytorch tests)
  • Nonetheless the example for handwriting recognition has identified the number correctly most of the time

Valgrind output for Yolo v2 model

I downloaded tiny yolo v2 model from https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/tiny-yolov2
When running inference with it, I got the following output from Valgrind:

==178736== Invalid read of size 1
==178736== at 0x162DF9: shash (onnxconf.h:146)
==178736== by 0x162F11: MaxPool_init (MaxPool.c:38)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid write of size 1
==178736== at 0x1154F1: onnx_attribute_read_string (onnx.c:1747)
==178736== by 0x162F09: MaxPool_init (MaxPool.c:38)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid read of size 1
==178736== at 0x13BEB8: shash (onnxconf.h:146)
==178736== by 0x13BFD1: Conv_init (Conv.c:43)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid write of size 1
==178736== at 0x1154F1: onnx_attribute_read_string (onnx.c:1747)
==178736== by 0x13BFC9: Conv_init (Conv.c:43)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== ERROR SUMMARY: 30 errors from 4 contexts (suppressed: 0 from 0)

Failed to load 'yolov5n.onnx'.

hi,
I tried to load 'yolov5n.onnx' like this:

#include "onnx.h"

int main(void)
{
    struct onnx_context_t *sess = onnx_context_alloc_from_file("yolov5n.onnx", NULL, 0);
    onnx_context_dump(sess, 1);
    return 0;
}

but nothing was output, not even warnings or errors.
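One thing worth checking first: if the allocator returns NULL on failure (an assumption here, but the typical C idiom for allocators), onnx_context_dump would silently print nothing. A defensive sketch:

```c
#include <stdio.h>
#include "onnx.h"

int main(void)
{
    struct onnx_context_t *sess = onnx_context_alloc_from_file("yolov5n.onnx", NULL, 0);
    if (!sess) {
        /* Assumption: NULL is returned when the file cannot be opened or parsed */
        fprintf(stderr, "Failed to load yolov5n.onnx\n");
        return 1;
    }
    onnx_context_dump(sess, 1);
    onnx_context_free(sess);
    return 0;
}
```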

Tensorflow model with opset 12 seems to crash when loaded

I have a model converted from Tensorflow that uses opset 12. (using tf2onnx.convert)
The model opens fine in Netron and elsewhere but crashes somewhere in Concat_reshape when I try to load it with onnx_context_alloc_from_file. I tried compiling for both x86 and x64 with the same result.

Here are the model properties as viewed through Netron (screenshot omitted).

Opening the models supplied in the libonnx test directory seemed to work fine. Do you have any suggestions for how to get this working? Thanks.

This project needs SDL2.

At first, I got an error when compiling this project:
main.c:1:10: fatal error: SDL2/SDL.h: No such file or directory
1 | #include <SDL2/SDL.h>
| ^~~~~~~~~~~~
Then I used sudo apt-get install libsdl2-gfx-dev to fix this error.

I think this project should document this dependency.

THANKS!

Can this software run on macOS?

I came across an error when compiling it on my Mac.

[CC] helper.c
In file included from helper.c:28:
./helper.h:13:10: fatal error: 'malloc.h' file not found
#include <malloc.h>
^~~~~~~~~~
1 error generated.
make[1]: *** [helper.o] Error 1
make: *** [all] Error 2

Running test/model/test_mnist_8 issue

Hi,

When I run test/model/test_mnist_8 once, it works and I get an OKAY result.
I then re-run it and it FAILS.

Any suggestion why this might be and what to look for?

isnan and isinf issue

Hi,

When I compile, I get the following error with clang. I then try to link the library and it complains. Not quite sure why; when I added isnan and isinf in main.c, it compiles OK. -lm is added to the linker.

Library compilation:
default/IsNaN.c:34:11: warning: implicit declaration of function 'isnanf' is invalid in C99 [-Wimplicit-function-declaration] py[i] = isnanf(v) ? 1 : 0;

Linker:
libonnx.a(.text+0x598): undefined reference to isnanf'

Unsupported opset => Gather-11 (ai.onnx)

IR Version: v6
Producer: pytorch 1.11.0
Domain:
Imports:
ai.onnx v11
Conv_0: Conv-11 (ai.onnx)
Inputs:
input.1: float32[1 x 3 x 352 x 352] = [...]
onnx::Conv_760: float32[24 x 3 x 3 x 3] = [...]
onnx::Conv_761: float32[24] = [...]
Outputs:
input.4: float32[1 x 24 x 176 x 176] = [...]

...

Concat_264: Concat-11 (ai.onnx)
Inputs:
onnx::Concat_744: float32[1 x 1 x 22 x 22] = [...]
onnx::Concat_960: float32[1 x 4 x 22 x 22] = [...]
onnx::Concat_757: float32[1 x 80 x 22 x 22] = [...]
Outputs:
758: float32[1 x 85 x 22 x 22] = [...]
110592
Unsupported opset => Gather-11 (ai.onnx)
... (Unsupported opset => Gather-11 (ai.onnx) repeated 26 times in total)
Unsupported opset => Resize-11 (ai.onnx)
Unsupported opset => Pad-11 (ai.onnx)

Issue:
Using https://github.com/dog-qiuqiu/FastestDet (Fastest yolo): unsupported opset. The onnx model is in /FastestDet/example/onnx-runtime.

The operators are not supported. There was no problem in the yolotiny test, but this issue appears with another lightweight model. I'd like to ask whether this is a problem with the operator implementations or something else.

Fails to compile on macOS

Hi there,

I came across this project and tried to compile it on macOS, but it fails with the following error.

main.c:90:47: warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long') [-Wformat]
printf("%-32s %ld %12.3f(us)\r\n", e->key, p->count, (p->count > 0) ? ((double)p->elapsed / 1000.0f) / (double)p->count : 0);
~~~ ^~~~~~~~
%llu
1 warning generated.
[LD] Linking benchmark
ld: library not found for -lcrt0.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [benchmark] Error 1
make[1]: *** [all] Error 2
make: *** [all] Error 2
