
deep-vision-processing's Introduction

Deep Vision Processing

Deep computer-vision algorithms for Processing.

The idea behind this library is to provide a simple way to run inference with machine-learning algorithms for computer-vision tasks inside Processing. Portability and ease of use are the primary goals of this library. Starting with version 0.6.0, CUDA inference support is built into the library (Windows & Linux).

Caution: The API is still in development and can change at any time.

Pose

Lightweight OpenPose Example

Install

It is recommended to use the contribution manager in the Processing app to install the library.


Manual

Download the latest prebuilt version from the releases section and install it into your Processing library folder.

Usage

The base of the library is the DeepVision class. It is used to download the pre-trained models and to create new networks.

import ch.bildspur.vision.*;
import ch.bildspur.vision.network.*;
import ch.bildspur.vision.result.*;

DeepVision vision = new DeepVision(this);

Usually it makes sense to define the network globally for your sketch and create it in setup(). The create method downloads the pre-trained weights if they do not already exist. The network first has to be created and then set up.

YOLONetwork network;

void setup() {
  // create the network & download the pre-trained models
  network = vision.createYOLOv3();

  // load the model
  network.setup();
  
  // set network settings (optional)
  network.setConfidenceThreshold(0.2f);
  
  ...
}

By default, the weights are stored in the library folder of Processing. If you want to download them to the sketch folder, use the following command:

// download to library folder
vision.storeNetworksGlobal();

// download to sketch/networks
vision.storeNetworksInSketch();

Each network has a run() method, which takes an image as a parameter and outputs a result. You can just throw in any PImage and the library starts processing it.

PImage myImg = loadImage("hello.jpg");
ResultList<ObjectDetectionResult> detections = network.run(myImg);

Please have a look at the specific networks for further information or at the examples.

OpenCL Backend Support

Since version 0.8.1, OpenCL is used as the backend by default if it is available. If CUDA is enabled as well, CUDA is preferred. It is possible to force the CPU backend by setting the following option:

DeepVision vision = new DeepVision(this);
vision.setUseDefaultBackend(true);

CUDA Backend Support

Since version 0.6.0 it is possible to download the CUDA-bundled libraries. This makes it possible to run most of the DNNs on CUDA-enabled graphics cards. For most networks this is necessary to run them in real time. If you have the CUDA-bundled version installed and run deep-vision on Linux or Windows with an NVIDIA graphics card, you can enable the CUDA backend:

// Second parameter (enableCUDABackend) enables CUDA
DeepVision vision = new DeepVision(this, true);

If the second parameter is unset, the library checks whether a CUDA-enabled device is available and enables the backend accordingly. You can check whether the CUDA backend has been enabled with the following method:

println("Is CUDA Enabled: " + vision.isCUDABackendEnabled());

If CUDA is enabled but the hardware does not support it, Processing will show you a warning and run the networks on the CPU.

Networks

Here is a list of the implemented networks:

  • Object Detection ✨
    • YOLOv3-tiny
    • YOLOv3-tiny-prn
    • EfficientNetB0-YOLOv3
    • YOLOv3 OpenImages Dataset
    • YOLOv3-spp (spatial pyramid pooling)
    • YOLOv3
    • YOLOv4
    • YOLOv4-tiny
    • YOLOv5 (n, s, m, l, x)
    • YOLO Fastest & XL
    • SSDMobileNetV2
    • Handtracking based on SSDMobileNetV2
    • TextBoxes
    • Ultra-Light-Fast-Generic-Face-Detector-1MB RFB (~30 FPS on CPU)
    • Ultra-Light-Fast-Generic-Face-Detector-1MB Slim (~40 FPS on CPU)
    • Cascade Classifier
  • Object Segmentation
    • Mask R-CNN
  • Object Recognition 🚙
    • Tesseract LSTM
  • Keypoint Detection 🤾🏻‍♀️
    • Facial Landmark Detection
    • Single Human Pose Detection based on lightweight openpose
  • Classification 🐈
    • MNIST CNN
    • FER+ Emotion
    • Age Net
    • Gender Net
  • Depth Estimation 🕶
    • MidasNet
  • Image Processing
    • Style Transfer
    • Multiple Networks for x2 x3 x4 Superresolution

The following list shows the networks that are on the list to be implemented (⚡️ already in progress):

  • YOLO 9K (not supported by OpenCV)
  • Multi Human Pose Detection ⚡️ (currently struggling with the partial affinity fields 🤷🏻‍♂️ help?)
  • TextBoxes++ ⚡️
  • CRNN ⚡️
  • PixelLink

Object Detection

Locating one or multiple predefined objects in an image is the task of the object detection networks.

YOLO

YOLO Example

The result of these networks is usually a list of ObjectDetectionResult.

ObjectDetectionNetwork net = vision.createYOLOv3();
net.setup();

// detect new objects
ResultList<ObjectDetectionResult> detections = net.run(image);

for (ObjectDetectionResult detection : detections) {
    println(detection.getClassName() + "\t[" + detection.getConfidence() + "]");
}

Every object detection result contains the following fields:

  • getClassId() - id of the class the object belongs to
  • getClassName() - name of the class the object belongs to
  • getConfidence() - how confident the network is on this detection
  • getX() - x position of the bounding box
  • getY() - y position of the bounding box
  • getWidth() - width of the bounding box
  • getHeight() - height of the bounding box
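The getters above report coordinates in the coordinate space of the input image, so a detection drawn over a resized preview has to be scaled first. A minimal sketch of such a scaling helper (BoxScaler is an illustrative name, not part of the library):

```java
// Illustrative helper (not part of deep-vision-processing): scale a
// bounding box from the source-image coordinate space into the
// coordinate space of a differently sized display surface.
class BoxScaler {
    public static float[] scale(float x, float y, float w, float h,
                                float srcW, float srcH,
                                float dstW, float dstH) {
        float sx = dstW / srcW; // horizontal scale factor
        float sy = dstH / srcH; // vertical scale factor
        return new float[]{x * sx, y * sy, w * sx, h * sy};
    }
}
```

With a 100×100 source shown on a 200×50 surface, a box at (10, 20) with size 30×40 maps to (20, 10) with size 60×20.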

YOLO [Paper]

YOLO is a very fast and accurate single-shot network. The pre-trained model is trained on the 80-class COCO dataset. The following weights & models are available in the repository:

  • YOLOv3-tiny (very fast, but trading performance for accuracy)
  • YOLOv3-spp (original model using spatial pyramid pooling)
  • YOLOv3 (608)
  • YOLOv4 (608)
  • YOLOv4-tiny (416)
  • YOLOv5n (640)
  • YOLOv5s (640)
  • YOLOv5m (640)
  • YOLOv5l (640)
  • YOLOv5x (640)

// create the network (choose one of the following)
YOLONetwork net = vision.createYOLOv4();
YOLONetwork net = vision.createYOLOv4Tiny();
YOLONetwork net = vision.createYOLOv3();
YOLONetwork net = vision.createYOLOv3SPP();
YOLONetwork net = vision.createYOLOv3Tiny();
YOLONetwork net = vision.createYOLOv5n();
YOLONetwork net = vision.createYOLOv5s();
YOLONetwork net = vision.createYOLOv5m();
YOLONetwork net = vision.createYOLOv5l();
YOLONetwork net = vision.createYOLOv5x();

// set confidence threshold
net.setConfidenceThreshold(0.2f);

YOLOv5

Since version 0.9.0, YOLOv5 is implemented as well. It uses the pre-trained models converted into the ONNX format. At the moment YOLOv5 does not work well with the implemented NMS. To adjust the NMS settings, use the following functions:

// set confidence threshold
net.setConfidenceThreshold(0.2f);

// set the IoU threshold (overlapping of the bounding boxes)
net.setNmsThreshold(0.4f);

// set how many objects should be taken into account for nms
// 0 means all objects
net.setTopK(100);
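The IoU threshold set by setNmsThreshold refers to the standard intersection-over-union overlap measure between two bounding boxes. As a self-contained illustration (generic code, not the library's internal implementation):

```java
// Standard intersection-over-union (IoU) for two axis-aligned boxes
// given as (x, y, width, height). This is the overlap measure that an
// NMS threshold is compared against; generic illustration only.
class IoU {
    public static double iou(double ax, double ay, double aw, double ah,
                             double bx, double by, double bw, double bh) {
        double ix1 = Math.max(ax, bx);
        double iy1 = Math.max(ay, by);
        double ix2 = Math.min(ax + aw, bx + bw);
        double iy2 = Math.min(ay + ah, by + bh);
        double inter = Math.max(0, ix2 - ix1) * Math.max(0, iy2 - iy1);
        double union = aw * ah + bw * bh - inter;
        return union <= 0 ? 0 : inter / union;
    }
}
```

Two identical boxes have an IoU of 1.0; with setNmsThreshold(0.4f), any detection overlapping a higher-confidence one by more than 0.4 IoU would be suppressed.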

SSDMobileNetV2 [Paper]

This network is a single-shot detector based on the MobileNetV2 architecture. It is pre-trained on the 90-class COCO dataset and is very fast.

SSDMobileNetwork net = vision.createMobileNetV2();

Handtracking [Project]

This is a pre-trained SSD MobilenetV2 network to detect hands.

SSDMobileNetwork net = vision.createHandDetector();

TextBoxes [Paper]

TextBoxes is a scene-text detector based on SSD MobileNet. It is able to detect text in a scene and return its location.

TextBoxesNetwork net = vision.createTextBoxesDetector();

Ultra-Light-Fast-Generic-Face-Detector [Project]

ULFG Face Detector is a very fast CNN-based face detector which reaches up to 40 FPS on a MacBook Pro. The face detector comes with four different pre-trained weights:

  • RFB640 & RFB320 - More accurate but slower detector
  • Slim640 & Slim320 - Less accurate but faster detector

// create the face detector (choose one of the following)
ULFGFaceDetectionNetwork net = vision.createULFGFaceDetectorRFB640();
ULFGFaceDetectionNetwork net = vision.createULFGFaceDetectorRFB320();
ULFGFaceDetectionNetwork net = vision.createULFGFaceDetectorSlim640();
ULFGFaceDetectionNetwork net = vision.createULFGFaceDetectorSlim320();

Note that the detector detects only the frontal part of the face, not the complete head. Most algorithms that run on face-detection results expect a rectangular detection area.
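If a downstream algorithm expects the whole head rather than just the frontal face, a common workaround is to pad the detected rectangle by a relative margin before passing it on. A hedged sketch of such padding (FacePadding is a hypothetical helper, not library API):

```java
// Hypothetical helper (not part of deep-vision-processing): expand a
// face rectangle by a relative margin on each side, clamped to the
// image bounds so the padded box stays inside the image.
class FacePadding {
    public static int[] expand(int x, int y, int w, int h,
                               double margin, int imgW, int imgH) {
        int px = (int) Math.round(w * margin); // horizontal padding
        int py = (int) Math.round(h * margin); // vertical padding
        int nx = Math.max(0, x - px);
        int ny = Math.max(0, y - py);
        int nw = Math.min(imgW, x + w + px) - nx;
        int nh = Math.min(imgH, y + h + py) - ny;
        return new int[]{nx, ny, nw, nh};
    }
}
```

For example, a 50×50 face at (100, 100) in a 640×480 image expands with a 0.2 margin to a 70×70 box at (90, 90).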

Cascade Classifier [Paper]

The cascade classifier detector is based on boosting and is commonly used as a pre-processor for many classifiers.

CascadeClassifierNetwork net = vision.createCascadeFrontalFace();

Object Recognition

tbd

KeyPoint Detection

tbd

Classification

tbd

Depth Estimation

MidasNet

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer


Image Processing

tbd

Pipeline

It is possible to create network pipelines, for example to run a face-detection network and then different classifiers on each detected face. This is not yet documented, so please check out the test code: HumanAttributesPipelineTest.java#L36-L41

Build

  • Install JDK 8 (required by Processing 3; JDK 11 for Processing 4)

Run Gradle to build a new release package under /release/deepvision.zip:

# windows
gradlew.bat releaseProcessingLib

# mac / unix
./gradlew releaseProcessingLib

Cuda Support

To build with CUDA support enable the property cuda:

gradlew.bat releaseProcessingLib -Pcuda -Pdisable-fatjar

This will take several minutes and result in a 5.3 GB folder. disable-fatjar prevents creating a fatjar, which would be too large to zip.

Platform Specific

To build for specific platforms only, use the property javacppPlatform:

# builds with support for all platforms
gradlew.bat releaseProcessingLib -PjavacppPlatform=linux-x86_64,macosx-x86_64,macosx-arm64,windows-x86_64,linux-armhf,linux-arm64

FAQ

Why is xy network not implemented?

Please open an issue if you have a cool network that could be implemented or just contribute a PR.

Why is it not possible to train my own network?

The idea was to give artists and makers a simple tool to run networks inside Processing. Training a network requires a lot of specific knowledge about neural networks (CNNs in particular).

Of course it is possible to train your own YOLO or SSDMobileNet and use the weights with this library. Check out the following example for detecting face masks: cansik/yolo-mask-detection

Is it compatible with Greg Borenstein's OpenCV for Processing?

No, OpenCV for Processing uses the direct OpenCV Java bindings instead of JavaCV. Please include only one of the two libraries, because Processing gets confused if two OpenCV packages are imported.

About

Maintained by cansik with the help of the following dependencies:

Stock images from the following people have been used:

  • yoga.jpg by Yogendra Singh from Pexels
  • office.jpg by fauxels from Pexels
  • faces.png by shvetsa from Pexels
  • hand.jpg by Thought Catalog on Unsplash
  • sport.jpg by John Torcasio on Unsplash
  • sticker.jpg by 🇨🇭 Claudio Schwarz | @purzlbaum on Unsplash
  • children.jpg by Sandeep Kr Yadav

deep-vision-processing's People

Contributors: cansik, codeanticode, giorgosxou


deep-vision-processing's Issues

UnsatisfiedLinkError: no jniopenblas_nolapack

Hi

I'm trying to test the deepvision library 0.7.0 with Processing 4.05b
on a Mac M1 Pro, macOS 12.2.1.

I think javacpp and the dependencies need to be updated to 1.5.7 to include macosx-arm64.
I'm testing Apple Silicon GPU performance against CUDA (RTX 3090).

Note: I'm using the JavaFX renderer, as the OpenGL issues are ongoing with 4.04b (Java 17.0.2) and above.

A library used by this sketch relies on native code that is not available.
UnsatisfiedLinkError: no jniopenblas_nolapack in java.library.path:

Kind regards

API structure proposal

This is a simplified API structure proposal on how to use the library:

PImage myImage = // something;

// deep vision API => maybe static
DeepVision deepVision = new DeepVision(this);

// create a default yolo v3 tiny 
DeepNeuralNetwork yolo = deepVision.createYolo(DNN.YOLOv3Tiny);

// run network
yolo.apply(myImage);

RuntimeException when running this example: NeuralStyleTransfer

Hi, I encountered RuntimeException when running this example: NeuralStyleTransfer, and the processing console prompt:

creating network...
loading model...
RuntimeException: OpenCV(4.5.3) D:\a\javacpp-presets\javacpp-presets\opencv\cppbuild\windows-x86_64\opencv-4.5.3\modules\dnn\src\torch\THDiskFile.cpp:286: error: (-2:Unspecified error) read error: read 327960 blocks instead of 424899 in function 'TH::THDiskFile_readFloat'

I'm aware that there might be something wrong with the OpenCV dependency on my system (Windows 11), but how do I make it right? Thanks a lot.

Resolving "java.lang.RuntimeException" (-212:Parsing error)

Solved

if you have this kind of issue:

java.lang.RuntimeException: OpenCV(4.2.0) C:\projects\javacpp-presets\opencv\cppbuild\windows-x86_64\opencv-4.2.0\modules\dnn\src\darknet\darknet_importer.cpp:207: error: (-212:Parsing error) Failed to parse NetParameter file: C:\Users\gxous\Desktop\χου\προγραμματισμός\java\TestDeepVision\TestDeepVision\lib\networks\yolov3.cfg in function 'cv::dnn::dnn4_v20191202::readNetFromDarknet'

        at org.bytedeco.opencv.global.opencv_dnn.readNetFromDarknet(Native Method)
        at ch.bildspur.vision.YOLONetwork.setup(YOLONetwork.java:42)
        at app.App.setup(App.java:29)
        at processing.core.PApplet.handleDraw(PApplet.java:2401)
        at processing.awt.PSurfaceAWT$12.callDraw(PSurfaceAWT.java:1557)
        at processing.core.PSurfaceNone$AnimationThread.run(PSurfaceNone.java:316)
PS C:\Users\gxous\Desktop\χου\προγραμματισμός\java\TestDeepVision\TestDeepVision> 

Then make sure that your project's directory doesn't have any non-Latin characters like mine, and your issue will be solved.
Also: iArunava/YOLOv3-Object-Detection-with-OpenCV#8
(if you want you can mark it as "good first issue")

fix examples for FX2d

Thanks for this great library.

Regarding the examples: in Processing 4 you need to explicitly import the FX2D library.

Please use Sketch → Import Library to add JavaFX to your sketch.

Adding import processing.javafx.*; solves this issue.

However, I don't know if FX2D is really needed, just changing the size by using the default Processing mode worked fine for me too.
size(560, 560, FX2D); >>> size(560, 560);

Nullpointer exception with Video Library and Processing 4

Currently it's not possible to use processing 4 and the video library to inference webcam images. Here the stacktrace:

Processing video library using GStreamer 1.16.2
java.lang.NullPointerException
	at ch.bildspur.vision.util.CvProcessingUtils.toCv(CvProcessingUtils.java:104)
	at ch.bildspur.vision.network.BaseNeuralNetwork.convertToMat(BaseNeuralNetwork.java:28)
	at ch.bildspur.vision.network.BaseNeuralNetwork.run(BaseNeuralNetwork.java:18)
	at YOLOWebcamExample.draw(YOLOWebcamExample.java:71)
	at processing.core.PApplet.handleDraw(PApplet.java:2460)
	at processing.javafx.PSurfaceFX$1.handle(PSurfaceFX.java:91)
	at processing.javafx.PSurfaceFX$1.handle(PSurfaceFX.java:87)
	at com.sun.scenario.animation.shared.TimelineClipCore.visitKeyFrame(TimelineClipCore.java:239)
	at com.sun.scenario.animation.shared.TimelineClipCore.playTo(TimelineClipCore.java:197)
	at javafx.animation.Timeline.doPlayTo(Timeline.java:177)
	at javafx.animation.AnimationAccessorImpl.playTo(AnimationAccessorImpl.java:39)
	at com.sun.scenario.animation.shared.InfiniteClipEnvelope.timePulse(InfiniteClipEnvelope.java:110)
	at javafx.animation.Animation.doTimePulse(Animation.java:1101)
	at javafx.animation.Animation$1.lambda$timePulse$0(Animation.java:186)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at javafx.animation.Animation$1.timePulse(Animation.java:185)
	at com.sun.scenario.animation.AbstractMasterTimer.timePulseImpl(AbstractMasterTimer.java:344)
	at com.sun.scenario.animation.AbstractMasterTimer$MainLoop.run(AbstractMasterTimer.java:267)
	at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:515)
	at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:499)
	at com.sun.javafx.tk.quantum.QuantumToolkit.pulseFromQueue(QuantumToolkit.java:492)
	at com.sun.javafx.tk.quantum.QuantumToolkit.lambda$runToolkit$11(QuantumToolkit.java:320)
	at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:96)
	at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
	at com.sun.glass.ui.win.WinApplication.lambda$runLoop$3(WinApplication.java:174)
	at java.base/java.lang.Thread.run(Thread.java:834)
NullPointerException
NullPointerException
WARNING: no real random source present!

MidasDepthEstimationWebcam example throws error

Hi Florian
Thanks for your update to the deepvision (0.6.0) library.

When using Processing 4.0.3 alpha, the MidasDepthEstimationWebcam example throws:
RuntimeException: resize() not implemented for this PImage type
using Processing video library using GStreamer 1.16.2

works with Processing 3.5.4

kind regards

Cannot use predefined weights in onnx format for YoloV5n, error is generated.

Dear cansik,

Thank you for providing the facility to use custom trained networks in onnx format with your Deep Vision library for Processing (V4, windows 10 and 11).

I've trained a custom detector in VS code using YoloV5n and exported it to onnx format, along with a file describing the labels (there is only a single class to be detected).

Unfortunately when I try to run detections on a single image I get the error below:

java.lang.NullPointerException: Cannot invoke "org.bytedeco.opencv.opencv_dnn.Net.setInput(org.bytedeco.opencv.opencv_core.Mat)" because "this.net" is null
at ch.bildspur.vision.YOLONetwork.run(YOLONetwork.java:81)
at ch.bildspur.vision.YOLONetwork.run(YOLONetwork.java:20)
at ch.bildspur.vision.network.BaseNeuralNetwork.run(BaseNeuralNetwork.java:21)
at YOLOv5.setup(YOLOv5.java:99)
at processing.core.PApplet.handleDraw(PApplet.java:2051)
at processing.awt.PSurfaceAWT$9.callDraw(PSurfaceAWT.java:1386)
at processing.core.PSurfaceNone$AnimationThread.run(PSurfaceNone.java:356)
Hi

I thought it might be an issue setting the path to the model and labels. The labels and onnx file are both contained in the sketch's data folder. The paths are shown below (some private info removed):

pl: C:\Users\xxxj\xxx\xxx\xxx\src\proc4sketches\YOLOv5\data\classes.txt
pm: C:\Users\jxxxj\xxx\xxx\xxx\src\proc4sketches\YOLOv5\data\best.onnx

Below is the setup part of the Processing sketch, if that is any help:

network = new YOLONetwork(
null, // for yolov5 the weights already contains the model in onnx format
pm, // trained weights of the model
640, 640, // inference size
false
);

network.loadLabels(pl);
network.setConfidenceThreshold(0.5f);
//network.setTopK(100);
//if (network == null)
if (loadimage == null)
println("it is null");

do
{
Thread.yield();
println("waiting ...");
}
while (network == null);
// note - waiting is only printing once, suggesting that the network was created in the constructor

try{
detections = network.run(loadimage);
}catch(NullPointerException e){e.printStackTrace();}
println("Hi");
}
// note the error I mentioned before is displayed from the stacktrace

Many thanks for any suggestions you can provide. I am happy to PM the complete sketch code to you.

Sincerely,

Zeffman

Congrats, Installation and FPS

Hey @cansik,

first of all: Congratulations and big thanks for your hard work. Stumbled upon your git when reading your comment in Linzaer's git.

Since I have no clue about Java, is there an easy way to install and/or are you planning to include an installation guide?

The 40fps on CPU grabbed my attention: With what input resolution was this achieved?
And are you planning to add GPU support e.g. via cuda?
Asking all of those questions because my input is a 1920x1080 video for which I try to achieve real time detection.

Thanks in advance

CUDA Enabled error

Hi,

I'm trying to run some of the examples on CUDA and I am getting the following error:

[ WARN:0] global modules\dnn\src\dnn.cpp (1442) cv::dnn::dnn4_v20201117::Net::Impl::setUpNet DNN module was not built with CUDA backend; switching to CPU

I read something about incompatibilities with OpenCV for Processing, and from some Google results I'm guessing that maybe it is related to that.

On the other hand, congratulations on this project!

use P2D / P3D without loadPixels

I want to dig into this later, but I was wondering if it is possible to use let's say the face detection with P2D instead of FX2D?
I need to use shaders too, so FX2D is not really an option for me.

But with P2D, I don't get any detections, unless I use loadPixels, which is really bad for the framerate.

I'm using 0.9.0.

OpenCV for Processing

OpenCV for Processing uses the direct OpenCV Java bindings instead of JavaCV. Please only include either one library, because Processing gets confused if two OpenCV packages are imported.

Is there a way to rewrite the example code using OpenCV for Processing so that it uses the OpenCV included in DeepVision?
I'd like students to play around with this, but installing/removing libraries might be too much hassle.

How would I import OpenCV with this library?
import gab.opencv.*;
And what would be the OpenCV object instead of OpenCV cv;?

Or is it not that simple?

DeepSort NN - feature request

Hi,
first of all thanks for the awesome library! I am testing it right now and it is very nice to see YOLO4tiny implemented. However, I am into tracking stuff and while the YOLO could be implemented into a more traditional blob tracking pipeline I would like to see a proper DeepSORT implementation.

Please have a look at this: https://github.com/theAIGuysCode/yolov4-deepsort

I have already run the DeepSort from that repository on separate computers and it works very well for tracking purposes. As the DeepSort is just YOLO+SORT it might not be that hard to implement since you have YOLO already and the linked GitHub repo is using OpenCV DNN as well.

Let me know if you feel like doing it and if there is something I can assist you with.

Installation of 0.8.1 throws errors - I have also tested 0.8.0 and 0.7.0

INSTALLATION NOTES:

Running nvidia RTX3080, processor Intel i9-11900K, Windows 10 64bit
Processing 4b07
using Cansik Deepvision 0.8.1 pre-release library compiled for win64 x86:
getting error:

RuntimeException: OpenCV(4.5.5) D:\a\javacpp-presets\javacpp-presets\opencv\cppbuild\windows-x86_64-gpu\opencv-4.5.5\modules\dnn\src\cuda4dnn\csl\memory.hpp:54: error: (-217:Gpu API call) the provided PTX was compiled with an unsupported toolchain. in function 'cv::dnn::cuda4dnn::csl::ManagedPtr::ManagedPtr'

updating GPU driver to:
511.79-desktop-win10-win11-64bit-international-dch-whql
After the update of the driver I get the error:

Could not run the sketch (Target VM failed to initialize).
For more information, read Help → Troubleshooting.

What worked for me was version 0.7.0 of the library, the universal release (for all platforms) downloaded from Google Drive.

The specific win64 x86 build did not work for me, not even the 0.7.0 version. It showed the error:

Warning: Unable to load properties : ZipFile invalid LOC header (bad signature)
ExceptionInInitializerError

The universal 0.7.0 version did work, but first I had to remove the Processing core files from the library.
It shows the error:

The library deepvision cannot be used because it contains the processing.core libraries. Please contact the library author for the update.

When I manually remove the file core-3.3.7.jar from the library folder, it works and runs the YOLO webcam example.

Do you have any idea what I could try to get around the "VM failed to initialize" error? Perhaps the GPU driver was too old initially, but now it is too new for the library? Can I make a log for you?

Can we implement YOLOv5 and YOLOv6 with custom data weights?

Can we implement custom YOLOv5 and YOLOv6 on this too, possibly allowing users to add their own custom-trained weights in the form of .pt and .yaml files? I'd also be interested, as I can't find any information on implementing custom datasets with the current deep vision processing.

thanks

Dear cansik, Regarding the 'Deep Vision Processing' library

Dear cansik, hello. How are you?

I’m sending this message regarding the ‘Deep Vision Processing library’.

Do you have any plans to update the library so that YOLOv5 runs as well?
I hope you can help, please.

If you have any plans, I am curious about your schedule.
I hope it gets updated (YOLOv5). Thank you for your review.

May you always be full of good things. And thanks for the nice library and environment.

Thank you very much.
