jing-vision / lightnet

License: MIT License
lightnet

lightnet is a turnkey solution to real-world problems accelerated with deep-learning AI technology, including but not limited to object detection, image classification, and human pose estimation.

How to read the source code

This project depends on a few open-source projects:

  • modules/darknet - the main engine for training & inference.
  • modules/Yolo_mark - the toolkit to prepare training data for object detection.
  • modules/yolo2_light - lightweight inference engine [optional].
  • modules/cvui - lightweight GUI based purely on OpenCV.
  • modules/pytorch-caffe-darknet-convert - DL framework model converter
  • modules/minitrace - library to generate tracing logs for Chrome "about:tracing"
  • modules/readerwriterqueue - single-producer, single-consumer lock-free queue for C++
  • modules/bhtsne - Barnes-Hut implementation of the t-SNE algorithm

How to build with Visual Studio 2015

Install NVIDIA SDK

  • Download CUDA 11.x

  • Download cuDNN v8.x

    • Extract to the same folder as CUDA SDK
    • e.g. c:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\
  • Download zlib

Install OpenCV

Build it

Execute the batch file

build.bat

Object Detection - inference w/ pre-trained weights

First you need to download the weights. You can read more details on the darknet website.

cfg weights names
cfg/yolov2.cfg https://pjreddie.com/media/files/yolov2.weights data/coco.names
cfg/yolov2-tiny.cfg https://pjreddie.com/media/files/yolov2-tiny.weights coco.names
cfg/yolo9000.cfg http://pjreddie.com/media/files/yolo9000.weights cfg/9k.names
cfg/yolov3.cfg https://pjreddie.com/media/files/yolov3.weights cfg/coco.names
cfg/yolov3-openimages.cfg https://pjreddie.com/media/files/yolov3-openimages.weights data/openimages.names
cfg/yolov3-tiny.cfg https://pjreddie.com/media/files/yolov3-tiny.weights cfg/coco.names
cfg/yolov2_shoe.cfg yolov2_shoe.weights obj.names
cfg/yolov4.cfg https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights cfg/coco.names

Syntax for object detection

darknet.exe detector demo <data> <cfg> <weights> -c <camera_idx> -i <gpu_idx>
darknet.exe detector demo <data> <cfg> <weights> <video_filename> -i <gpu_idx>
darknet.exe detector test <data> <cfg> <weights> <img_filename> -i <gpu_idx>

Default launch device combination is -i 0 -c 0.

Run yolov4 on camera #0

darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights

Run yolov3 on camera #0

darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights

Run yolo9000 on camera #0

darknet.exe detector demo cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights

Run yolo9000 on images

darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights

Run yolo9000 CPU on camera #0

darknet_no_gpu.exe detector demo cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights

Object Detection - label images manually

Object Detection - train yolo v2 network

  1. Fork __template-yolov2 to my-yolo-net

  2. Download pre-trained weights for the convolutional layers: http://pjreddie.com/media/files/darknet19_448.conv.23 to bin/darknet19_448.conv.23

  3. To train the network on your custom objects, change these lines:

  • set the number of classes in obj.data#L1
  • set the number of classes (objects) in obj.cfg#L230
  • set the filter value equal to (classes + 5)*5 in obj.cfg#L224

  4. Run my-yolo-net/train.cmd
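The (classes + 5)*5 rule above can be sanity-checked with a short Python helper (a sketch; the function name is illustrative):

```python
def yolov2_filters(classes: int, coords: int = 4, num: int = 5) -> int:
    """Filters of the last [convolutional] layer before [region]:
    (classes + coords + 1) * num, i.e. (classes + 5)*5 with darknet defaults."""
    return (classes + coords + 1) * num

print(yolov2_filters(1))   # single custom class -> 30
print(yolov2_filters(80))  # COCO's 80 classes -> 425, as in stock yolov2.cfg
```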

Object Detection - train yolo v3 network

  1. Fork __template-yolov3 to my-yolo-net

  2. Download pre-trained weights for the convolutional layers: http://pjreddie.com/media/files/darknet53.conv.74 to bin/darknet53.conv.74

  3. Create file obj.cfg with the same content as in yolov3.cfg (or copy yolov3.cfg to obj.cfg) and:

  • change line batch to batch=64
  • change line subdivisions to subdivisions=8
  • change line classes=80 to your number of objects in each of 3 [yolo]-layers:
    • obj.cfg#L610
    • obj.cfg#L696
    • obj.cfg#L783
  • change filters=255 to filters=(classes + 5)x3 in the 3 [convolutional] layers before each [yolo] layer
    • obj.cfg#L603
    • obj.cfg#L689
    • obj.cfg#L776

So if classes=1, then filters=18; if classes=2, then filters=21.

(Do not write in the cfg-file: filters=(classes + 5)x3)

(Generally, filters depends on the classes, coords, and number of masks, i.e. filters=(classes + coords + 1)*<number of mask>, where mask is the indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num.)

So for example, for 2 objects, your file obj.cfg should differ from yolov3.cfg in such lines in each of 3 [yolo]-layers:

  [convolutional]
  filters=21

  [yolo]
  classes=2
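The yolov3 variant of the rule, filters=(classes + coords + 1)*<number of mask> with 3 masks per [yolo] head, can be checked the same way (a sketch; the function name is illustrative):

```python
def yolov3_filters(classes: int, coords: int = 4, masks: int = 3) -> int:
    """Filters of each [convolutional] layer before a [yolo] layer."""
    return (classes + coords + 1) * masks

print(yolov3_filters(1))   # 18, as stated above
print(yolov3_filters(2))   # 21, as stated above
print(yolov3_filters(80))  # 255, as in stock yolov3.cfg
```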

Image Classification - inference w/ pre-trained weights

Again, you need to download the weights first. You can read more details on the darknet website.

cfg weights
cfg/alexnet.cfg https://pjreddie.com/media/files/alexnet.weights
cfg/vgg-16.cfg https://pjreddie.com/media/files/vgg-16.weights
cfg/extraction.cfg https://pjreddie.com/media/files/extraction.weights
cfg/darknet.cfg https://pjreddie.com/media/files/darknet.weights
cfg/darknet19.cfg https://pjreddie.com/media/files/darknet19.weights
cfg/darknet19_448.cfg https://pjreddie.com/media/files/darknet19_448.weights
cfg/darknet53.cfg https://pjreddie.com/media/files/darknet53.weights
cfg/resnet50.cfg https://pjreddie.com/media/files/resnet50.weights
cfg/resnet152.cfg https://pjreddie.com/media/files/resnet152.weights
cfg/densenet201.cfg https://pjreddie.com/media/files/densenet201.weights

Image Classification - train darknet19_448 network

  1. Fork __template-darknet19_448 to my-darknet19-net

  2. Download pre-trained weights for the convolutional layers: http://pjreddie.com/media/files/darknet19_448.conv.23 to bin/darknet19_448.conv.23

  3. Create file obj.cfg with the same content as in darknet19_448.cfg (or copy darknet19_448.cfg to obj.cfg) and:

  • set batch to 128, 64, or 32, depending on your GPU memory, in darknet19-classify.cfg#L4
  • change the subdivisions line to subdivisions=4
  • set the filter value equal to the number of classes in darknet19-classify.cfg#L189
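The cfg edits above can also be scripted. A minimal sketch, assuming a hypothetical patch_cfg() helper that rewrites the first key=value occurrence of each key (sufficient for the global [net] section, but not for per-layer filters lines):

```python
import re

def patch_cfg(text: str, **overrides: int) -> str:
    """Rewrite the first 'key=value' line for each given key in a darknet cfg."""
    for key, value in overrides.items():
        text = re.sub(rf"^{key}=\S+", f"{key}={value}", text, count=1, flags=re.M)
    return text

cfg = "[net]\nbatch=1\nsubdivisions=1\n"
print(patch_cfg(cfg, batch=128, subdivisions=4))
# [net]
# batch=128
# subdivisions=4
```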

Human Pose Estimation - inference w/ pre-trained weights

This project lives in DancingGaga

For more details, please check the README there.

Weight file (darknet-format openpose.weight):

https://drive.google.com/open?id=1BfY0Hx2d2nm3I4JFh0W1cK2aHD1FSGea

FAQ

blobFromImage() vs letterbox_image() vs resize_image()

AlexeyAB/darknet#232 (comment)

How to fix CUDA Error: no kernel image is available for execution on the device?

# Tesla V100
# ARCH= -gencode arch=compute_70,code=[sm_70,compute_70]

# GeForce RTX 2080 Ti, RTX 2080, RTX 2070, Quadro RTX 8000, Quadro RTX 6000, Quadro RTX 5000, Tesla T4, XNOR Tensor Cores
# ARCH= -gencode arch=compute_75,code=[sm_75,compute_75]

# Jetson XAVIER
# ARCH= -gencode arch=compute_72,code=[sm_72,compute_72]

# GTX 1080, GTX 1070, GTX 1060, GTX 1050, GTX 1030, Titan Xp, Tesla P40, Tesla P4
# ARCH= -gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61

# GP100/Tesla P100 - DGX-1
# ARCH= -gencode arch=compute_60,code=sm_60

# For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment:
# ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]

# For Jetson Tx2 or Drive-PX2 uncomment:
# ARCH= -gencode arch=compute_62,code=[sm_62,compute_62]
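The Makefile comments above map GPUs to compute capabilities; the flag can be generated mechanically (a sketch; the ARCH table and gencode() helper are illustrative and cover only the devices listed above):

```python
# GPU -> SM compute capability, from the Makefile comments above
ARCH = {
    "Tesla V100": 70,
    "RTX 2080 Ti": 75,
    "Jetson Xavier": 72,
    "GTX 1080": 61,
    "Tesla P100": 60,
    "Jetson TX1": 53,
    "Jetson TX2": 62,
}

def gencode(gpu: str) -> str:
    """Build the NVCC -gencode flag for a known GPU."""
    cc = ARCH[gpu]
    return f"-gencode arch=compute_{cc},code=[sm_{cc},compute_{cc}]"

print(gencode("GTX 1080"))  # -gencode arch=compute_61,code=[sm_61,compute_61]
```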

How to fine-tune an existing network?

https://github.com/pjreddie/darknet/wiki/YOLO:-Real-Time-Object-Detection

darknet.exe partial cfg/darknet.cfg darknet.weights darknet.conv.13 13
darknet.exe partial cfg/extraction.cfg extraction.weights extraction.conv.23 23
darknet.exe partial cfg/darknet19.cfg darknet19.weights darknet19.conv.23 23
darknet.exe partial cfg/darknet19_448.cfg darknet19_448.weights darknet19_448.conv.23 23
darknet.exe partial cfg/darknet53.cfg darknet53.weights darknet53.conv.74 74
darknet.exe partial cfg/resnet50.cfg resnet50.weights resnet50.conv.66 66

Explanation of yolo training output

https://github.com/rafaelpadilla/darknet#faq_yolo

CFG Parameters

How to use scripts folder?

pip install -r scripts/requirements.txt

lightnet's People

Contributors

vinjn

lightnet's Issues

where is get_network_layer() code

Hi,

I'm trying to build feature-viz; I included lightnet, but when compiling, get_network_layer is undefined.
Where is the definition/source code to include?

Thanks

cuDNN Error: CUDNN_STATUS_BAD_PARAM

Hi,
With VS2015, CUDA 10.0, cuDNN 7.6.2.
After building, I execute:

darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights

and it reports the following error:

cuDNN status Error in: file: ..\modules\darknet\src\convolutional_layer.c : cudnn_convolutional_setup() : line: 236 : build time: Aug  7 2019 - 07:13:29
cuDNN Error: CUDNN_STATUS_BAD_PARAM

thx

feature-viz

Hello:
What environment needs to be set up before running the feature-viz project? I have already created the vs2015 folder following the steps, but I hit many errors when running the .sln file. My approach was to download lightnet and build inside the feature-viz folder. Thanks!

Performance and accuracy of yolov2_light

@vinjn Hello, thanks for the wonderful source code. I had a few queries:

  1. What is the weight size of the yolov2_light model?
  2. Have you performed any quantization and pruning techniques on these models?
  3. What performance is achieved, and with what loss of accuracy?

Jing-pose error

Hey again, you must love me asking all these questions lol.

I generated the VS files for jing-pose and compiled with no errors.
I get jing-pose.exe in the bin folder; I've downloaded the openpose.weight file listed on the GitHub main page and used the openpose.cfg file located in the jing-pose directory.

But I get the following issue:

C:\lightnet\bin>jing-pose.exe openpose.cfg openpose.weight person.jpg
Failed to open: openpose.cfg
warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:808)
warning: openpose.weight (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:809)
Failed to open: openpose.weight
Try to load cfg: openpose.cfg, weights: openpose.weight, clear = 0
batch: Using default '1'
learning_rate: Using default '0.001000'
momentum: Using default '0.900000'
decay: Using default '0.000100'
subdivisions: Using default '1'
policy: Using default 'constant'
max_batches: Using default '0'
layer filters size input output
0 conv 64 3 x 3 / 1 200 x 200 x 3 -> 200 x 200 x 64 0.138 BF
1 conv 64 3 x 3 / 1 200 x 200 x 64 -> 200 x 200 x 64 2.949 BF
2 max 2 x 2 / 2 200 x 200 x 64 -> 100 x 100 x 64 0.003 BF
3 conv 128 3 x 3 / 1 100 x 100 x 64 -> 100 x 100 x 128 1.475 BF
4 conv 128 3 x 3 / 1 100 x 100 x 128 -> 100 x 100 x 128 2.949 BF
5 max 2 x 2 / 2 100 x 100 x 128 -> 50 x 50 x 128 0.001 BF
6 conv 256 3 x 3 / 1 50 x 50 x 128 -> 50 x 50 x 256 1.475 BF
7 conv 256 3 x 3 / 1 50 x 50 x 256 -> 50 x 50 x 256 2.949 BF
8 conv 256 3 x 3 / 1 50 x 50 x 256 -> 50 x 50 x 256 2.949 BF
9 conv 256 3 x 3 / 1 50 x 50 x 256 -> 50 x 50 x 256 2.949 BF
10 max 2 x 2 / 2 50 x 50 x 256 -> 25 x 25 x 256 0.001 BF
11 conv 512 3 x 3 / 1 25 x 25 x 256 -> 25 x 25 x 512 1.475 BF
12 conv 512 3 x 3 / 1 25 x 25 x 512 -> 25 x 25 x 512 2.949 BF
13 conv 256 3 x 3 / 1 25 x 25 x 512 -> 25 x 25 x 256 1.475 BF
14 conv 128 3 x 3 / 1 25 x 25 x 256 -> 25 x 25 x 128 0.369 BF
15 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
16 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
17 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
18 conv 512 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 512 0.082 BF
19 conv 38 1 x 1 / 1 25 x 25 x 512 -> 25 x 25 x 38 0.024 BF
20 route 14
21 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
22 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
23 conv 128 3 x 3 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.184 BF
24 conv 512 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 512 0.082 BF
25 conv 19 1 x 1 / 1 25 x 25 x 512 -> 25 x 25 x 19 0.012 BF
26 route 19 25 14
27 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
28 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
29 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
30 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
31 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
32 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
33 conv 38 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 38 0.006 BF
34 route 26
35 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
36 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
37 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
38 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
39 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
40 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
41 conv 19 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 19 0.003 BF
42 route 33 41 14
43 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
44 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
45 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
46 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
47 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
48 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
49 conv 38 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 38 0.006 BF
50 route 42
51 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
52 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
53 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
54 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
55 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
56 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
57 conv 19 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 19 0.003 BF
58 route 49 57 14
59 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
60 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
61 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
62 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
63 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
64 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
65 conv 38 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 38 0.006 BF
66 route 58
67 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
68 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
69 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
70 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
71 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
72 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
73 conv 19 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 19 0.003 BF
74 route 65 73 14
75 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
76 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
77 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
78 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
79 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
80 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
81 conv 38 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 38 0.006 BF
82 route 74
83 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
84 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
85 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
86 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
87 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
88 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
89 conv 19 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 19 0.003 BF
90 route 81 89 14
91 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
92 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
93 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
94 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
95 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
96 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
97 conv 38 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 38 0.006 BF
98 route 90
99 conv 128 7 x 7 / 1 25 x 25 x 185 -> 25 x 25 x 128 1.450 BF
100 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
101 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
102 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
103 conv 128 7 x 7 / 1 25 x 25 x 128 -> 25 x 25 x 128 1.004 BF
104 conv 128 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 128 0.020 BF
105 conv 19 1 x 1 / 1 25 x 25 x 128 -> 25 x 25 x 19 0.003 BF
106 route 105 97
Total BFLOPS 80.306
Loading weights from openpose.weight...
seen 32
Done!
OpenCV Error: Assertion failed (!input.empty()) in create_netsize_im, file ..\src\post_process.cpp, line 495

'activation_kernels.cu' does not exist?

Hi, I'm trying to compile darknet but get the following error message in Visual Studio 2015:

Severity Code Description Project File Line Suppression State
Error The path specified for SourceFile at 'C:\lightnet\lightnet\modules\darknet\src\activation_kernels.cu' does not exist. darknet C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\BuildCustomizations\CUDA 9.1.targets 428

Any help would be great. Cheers, J

How to configure the visualization

How do I configure the visualization file? Is it similar to the following, and how do I use it?

feature-viz.exe -cfg=cfg\alexnet.cfg -weights=alexnet.weights
feature-viz.exe -cfg=cfg\darknet.cfg -weights=darknet.weights
feature-viz.exe -cfg=cfg\darknet19.cfg -weights=darknet19.weights

submodule checkout error

This is the error message:

fatal: reference is not a tree: 6bb99873873f0fde08e8ecde754d5cb0e0ff97b8
Unable to checkout '6bb99873873f0fde08e8ecde754d5cb0e0ff97b8' in submodule path 'modules/yolo2_light'

The repository yolo2_light is not up to date.

Feature visualization

Hi! Thank you for sharing your work. I want to use feature-viz with CPU, is that possible?

I have compiled the yolo lib for CPU, but there is a function in lightnet.cpp that seems to need the GPU: cuda_pull_array(l->output_gpu, l->output, l->outputs * l->batch)
