Advanced driver-assistance system
- Google Coral Edge TPU Dev Board / USB Accelerator
- Intel Movidius NCS (neural compute stick), Myriad 2/X VPU
- Gyrfalcon 2801 Neural Accelerator
- NVIDIA Jetson Nano
- Khadas VIM3 / AML NPU
License: GNU General Public License v3.0
Hey! Cool project. I've extended it to do MJPEG streaming with target drawing etc. I've also added sub-image sampling so that inference is run repeatedly over slightly overlapping 300x300 images within the original image. This is so that targets are detected much farther away. It is actually faster too because scaling down takes so long, relatively.
Anyway, with 720p as an example, I discovered that you can get an 8x+ fps speedup just by using the CAP_V4L capture backend.
Replace:
cap = cv::VideoCapture(camera_device);
with:
cap = cv::VideoCapture(camera_device, cv::CAP_V4L);
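The overlapping sub-image sampling described above can be sketched as follows. This is an illustrative reimplementation, not the poster's actual code; the 60 px overlap and all names are assumptions, and it assumes the frame is at least 300x300.

```cpp
#include <algorithm>
#include <vector>

// Top-left corners of overlapping 300x300 tiles covering a frame.
// Inference is then run once per tile instead of once on a downscaled frame.
struct Tile { int x; int y; };

std::vector<Tile> makeTiles(int frameW, int frameH,
                            int tile = 300, int overlap = 60) {
    std::vector<Tile> tiles;
    const int step = tile - overlap;                // 240 px between tile origins
    for (int y = 0; y < frameH; y += step) {
        int ty = std::min(y, frameH - tile);        // clamp last row to the edge
        for (int x = 0; x < frameW; x += step) {
            int tx = std::min(x, frameW - tile);    // clamp last column to the edge
            tiles.push_back({tx, ty});
            if (tx == frameW - tile) break;         // last column reached
        }
        if (ty == frameH - tile) break;             // last row reached
    }
    return tiles;
}
```

For a 1280x720 frame this yields a 6x3 grid of tiles; each tile would then be cropped out (e.g. with `cv::Mat::operator()(cv::Rect)`) and fed to the detector, with box coordinates offset back by the tile origin.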
Do you know how to get servo/PWM output working? There are some badass things that can be done if we can get the coral dev board to do servo/PWM output.
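I don't know the Coral Dev Board's pin mux, but if the SoC exposes a PWM channel through the standard Linux sysfs interface, driving a hobby servo reduces to writing a 20 ms period and a 1 to 2 ms pulse width, in nanoseconds. A hedged sketch of just the math; the sysfs path mentioned in the comment is an assumption, not a verified Coral mapping:

```cpp
#include <cstdint>

// Hobby-servo PWM math (hardware-independent): 50 Hz frame, pulse width
// swept linearly from 1.0 ms (0 deg) to 2.0 ms (180 deg). These are the
// values you would write to a Linux sysfs PWM channel, e.g.
// /sys/class/pwm/pwmchipN/pwm0/{period,duty_cycle} -- the chip/channel
// numbers on the Coral Dev Board are an assumption here.
constexpr int64_t kPeriodNs = 20'000'000;  // 20 ms frame (50 Hz)

int64_t servoDutyNs(double angleDeg) {
    if (angleDeg < 0.0)   angleDeg = 0.0;
    if (angleDeg > 180.0) angleDeg = 180.0;
    return static_cast<int64_t>(1'000'000 + angleDeg / 180.0 * 1'000'000);
}
```

So 90 degrees maps to a 1.5 ms pulse (1,500,000 ns) within the 20 ms period.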
Hey @larrylart, during compilation of the code on VIM3, did you use a cross compiler? If so, does this demo work with a cross compiler?
I'm trying to build this project on the coral dev board. I was able to build opencv4.1, but not tensorflow, or this project.
Issues:
sh ./build_coral_lib.sh
which failed on line 19 (before the line that was modified in your instructions) with:
./build_coral_lib.sh: 19: ./build_coral_lib.sh: Bad substitution
Building the project itself then fails at link time with:
/usr/bin/ld: cannot find -ledgetpu
Do you know how to solve either issue?
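On the "Bad substitution" error: that message usually means a bash-only parameter expansion was executed by plain sh (which is dash on Debian-derived images such as Mendel), so invoking the script with bash typically fixes it. The `-ledgetpu` failure usually means libedgetpu.so isn't on the linker's search path (install the Edge TPU runtime, or pass `-L` with the directory containing it). A small demonstration of the sh-vs-bash difference:

```shell
# Pattern-replacement expansion works under bash but not under dash;
# running the same snippet with "sh -c" fails with: Bad substitution.
bash -c 'v="edgetpu"; echo "${v//e/E}"'   # prints: EdgEtpu

# So run the build script with bash explicitly:
# bash ./build_coral_lib.sh
```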
Running aml_object_detect, I get a "Memory leak" reported in vsi_nn_RunGraph.
I'm using CLion with remote development set up to automatically push code the coral dev board (CDB). CLion also references the libraries on the CDB (instead of whatever libraries may or may not be installed on my desktop). It looks like you had tensorflow checked out in the project directory during development. I would like to remove that dependency and instead set it up so that CLion resolves tensorflow-lite and edgetpu objects in the provided static and shared libraries.
What would all instances of #include "tensorflow/lite/..."
need to be replaced with to get CLion, or any other IDE, to reference and resolve tensorflow objects to lib/libtensorflow-lite_aarch64.a
instead of a locally checked out tensorflow source directory?
(I realize that the makefile is already utilizing the static and shared libraries, but if the IDE can't resolve tensorflow objects without (a possibly different version of) tensorflow checked out locally, then that adds friction to development.)
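One way to do this without touching the `#include "tensorflow/lite/..."` lines at all is to extract only the headers from a matching tensorflow checkout and point the IDE at them as an include directory, alongside the prebuilt libraries. The static archive contains only object code, so some header tree is always needed; the point is that it can be a slim, version-pinned copy instead of a full source checkout. A hypothetical CMake sketch (all paths are assumptions, and the headers must come from the same version the static lib was built from):

```cmake
cmake_minimum_required(VERSION 3.10)
project(codriver_ide)

add_executable(codriver src/main.cpp)

# Slim header tree extracted from the tensorflow source, plus the
# flatbuffers headers that the TFLite headers include.
target_include_directories(codriver PRIVATE
    ${CMAKE_SOURCE_DIR}/third_party/tflite-headers
    ${CMAKE_SOURCE_DIR}/third_party/flatbuffers/include)

# Prebuilt static/shared libraries shipped in lib/.
target_link_libraries(codriver
    ${CMAKE_SOURCE_DIR}/lib/libtensorflow-lite_aarch64.a
    edgetpu)
```

CLion reads the include paths from CMake, so with this in place it resolves the tensorflow-lite symbols against the extracted headers rather than a local checkout.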
I followed the instructions in the README. Inference runs via
pAMLWorker->GetInference( vectMatchResult );
but no COCO objects are ever detected, since
vectMatchResult.size()
is always zero.
pAMLWorker->m_inferenceTime
is around 0.03. So I don't understand why no objects are detected.
However, I am using Ubuntu 20.04 with the latest image, the NPU SDK has had some updates over the past few months, and I adapted the code to use OpenCV 4.5. Am I missing something?
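I can't see the project's internals from here, but a plausible inference time combined with an always-empty result vector often points at post-processing rather than the model: for example, a fixed score threshold silently rejecting everything because an SDK update changed the output scaling (say, scores now in 0..1 where the code expects 0..100). A sketch of the kind of filter to look for; the types and names are illustrative, not the project's actual API:

```cpp
#include <vector>

// A detection with a confidence score, filtered by a minimum threshold.
// If the score scale changed between SDK versions, a hard-coded minScore
// can drop every detection even though inference itself ran fine.
struct Detection { int classId; float score; };

std::vector<Detection> filterByScore(const std::vector<Detection>& raw,
                                     float minScore) {
    std::vector<Detection> kept;
    for (const auto& d : raw)
        if (d.score >= minScore) kept.push_back(d);
    return kept;
}
```

Printing the raw scores just before the threshold comparison should show immediately whether the scale (or the class/score tensor layout) changed between SDK versions.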
I'm trying to build this sample code on my XU-4 running Mate 16.04 and Mate 18.04 to help evaluate a potential performance regression in Mate18 vs. Mate16 that I uncovered running an OpenVINO C++ example code for the Movidius NCS2.
I get the same error on both systems.
Running ./tools/make/build_rpi_lib.sh, I get this error:
In file included from ./tensorflow/lite/core/api/op_resolver.h:20:0,
from ./tensorflow/lite/core/api/flatbuffer_conversions.h:24,
from tensorflow/lite/core/api/flatbuffer_conversions.cc:16:
./tensorflow/lite/schema/schema_generated.h:21:37: fatal error: flatbuffers/flatbuffers.h: No such file or directory
I've no idea what package flatbuffers.h belongs to :(
but I doubt it's the only missing dependency.
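For what it's worth, flatbuffers.h ships with the FlatBuffers project, which the TFLite Makefile build of that era vendors under its downloads directory rather than taking from a system package. In those TensorFlow versions the third-party dependencies were fetched by a script next to build_rpi_lib.sh; the exact path can differ slightly between releases, so check the tools/make/ directory in your checkout:

```sh
# Fetch flatbuffers and the other vendored dependencies first,
# then rebuild (paths relative to the tensorflow checkout root):
./tensorflow/lite/tools/make/download_dependencies.sh
./tensorflow/lite/tools/make/build_rpi_lib.sh
```

Installing a distro flatbuffers package instead is risky, since the headers must match the FlatBuffers version the TFLite schema was generated with.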
The idea is that if the Coral code doesn't have the performance decrement, the problem is likely in OpenVINO, as it's not "officially" supported on 18.04 at present.
Python samples using Coral and Movidius show differences within the run-to-run variance of the code, although Mate16 is on average ~0.5 fps higher; I won't claim statistical significance.
Gist of the performance decrement:
There appears to be a performance regression where Mate18 is significantly worse than the Mate16 system (remember, this is C++ code, not Python). I get the following results from the sample code:
Odroid XU-4 Mate16
NCS: 8.22 fps
NCS2: 11.5 fps
Odroid XU-4 Mate18
NCS: 6.56 fps
NCS2: 8.36 fps
Raspberry Pi3B
NCS: 6.93 fps
NCS2: 8.58 fps
Looks to be a performance regression on Mate18 vs. Mate16: basically, Mate18 is a bit worse than a Pi3B here!
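For concreteness, the fps numbers above work out to roughly a 20% (NCS) and 27% (NCS2) slowdown from Mate16 to Mate18, and Mate18's 6.56 fps on NCS is indeed below the Pi3B's 6.93 fps:

```cpp
// Relative slowdown implied by the reported fps numbers:
// regression = (fast - slow) / fast.
// Mate16 -> Mate18 on the XU-4: NCS  8.22 -> 6.56 fps (~20% drop),
//                               NCS2 11.5 -> 8.36 fps (~27% drop).
double regression(double fastFps, double slowFps) {
    return (fastFps - slowFps) / fastFps;
}
```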