fabianplum / omnitrax
Deep learning-driven multi animal tracking and pose estimation add-on for Blender
License: MIT License
| Status | Count |
|---|---|
| Total | 197 |
| Successful | 194 |
| Timeouts | 0 |
| Redirected | 0 |
| Excluded | 2 |
| Unknown | 0 |
| Errors | 1 |
I think this is a user-specific folder for defining the settings of a PyCharm project.
I see it's in your gitignore (so maybe a slip?)
I wasn't able to run the tests locally on my machine - would it be possible to include steps for this?
A good place for this could be in the CONTRIBUTING.md file, since after implementing a feature one would want to run the tests to check nothing broke.
It would be nice to also add the other dependencies required for development, maybe with a requirements-dev.txt file (to run `pip install -r requirements-dev.txt` in the desired environment).
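As an illustrative sketch (the tool choices here are assumptions, not taken from the repository), such a file could be as small as:

```text
# requirements-dev.txt - development-only dependencies (illustrative)
pytest   # run the test suite
ruff     # linting and docstring-style checks
```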
Some questions about dependencies came up while reviewing:
Hi,
Below are some suggestions on the contributing guidelines - IMO only the first one would be a required change.
Since the contributing guidelines live in the .github directory, I would suggest moving them to the root directory of the project. Thanks!
Hi there! I believe the documentation could benefit from additional details regarding this topic, as mentioned in your software paper.
A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
Thank you!
| Status | Count |
|---|---|
| Total | 185 |
| Successful | 182 |
| Timeouts | 0 |
| Redirected | 0 |
| Excluded | 2 |
| Unknown | 0 |
| Errors | 1 |
Hi @FabianPlum! I'm nearly finished with the checks, but I've noticed the codebase could greatly benefit from more descriptive docstrings and some reformatting. To enhance consistency and readability, I recommend choosing a single docstring style (either NumPy, Google, or PEP 257) that you feel best suits your project.
For implementing these improvements, `ruff` can be a good tool. You can configure `ruff` to enforce your chosen docstring style and other linting rules by creating a pyproject.toml file with the necessary settings. When you run `ruff check .`, it will pick up these configurations automatically. For more detailed instructions on setting up and using ruff, please see: https://pypi.org/project/ruff/0.0.221/.
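As a sketch, a minimal pyproject.toml enabling ruff's docstring checks might look like this (the Google convention is just an example; on older ruff releases these tables live directly under `[tool.ruff]` rather than `[tool.ruff.lint]`):

```toml
[tool.ruff.lint]
select = ["D"]            # "D" rules = pydocstyle docstring checks

[tool.ruff.lint.pydocstyle]
convention = "google"     # or "numpy" / "pep257"
```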
Describe the bug
V0.2.2 crashes to desktop during tracking.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Just work :)
Screenshots
X
Desktop (please complete the following information):
Additional context
Blender refuses to write crash logs; open Blender through a command prompt to keep a console open.
The console notes a hardcoded path to a YOLO file which does not exist, under user "PlumStation"?
LOG:
`INFO: successfully loaded OmniTrax
Found computational devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Read blend: K:\Blender3D\OmniTrax\Saved\OmniTrax1.blend
2023-05-05 00:48:18.303125: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-05 00:48:18.775515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6440 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1070 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1
Running inference on: [LogicalDevice(name='/device:CPU:0', device_type='CPU'), LogicalDevice(name='/device:GPU:0', device_type='GPU')]
INFO: Initialising darkent network...
<bpy_struct, MaskSpline at 0x000001B09A987308>
0.46663743257522583 0.849280059337616
0.6812906265258789 0.8212818503379822
0.7046225070953369 0.5366330146789551
0.4573046863079071 0.5646312832832336
[[[0.46663743257522583, 0.849280059337616], [0.6812906265258789, 0.8212818503379822], [0.7046225070953369, 0.5366330146789551], [0.4573046863079071, 0.5646312832832336]]]
[[503 162]
[735 193]
[760 500]
[493 470]]
Beginning counting from ID 0
INITIALISED TRACKER!
The imported clip: K:\Blender3D\OmniTrax\Saved..\SourceContent\Recordings\Insect_Ant\single_ant_1080p.mp4 has a total of 2000 frames.
Try to load cfg: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.cfg, weights: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights, clear = 0
0 : compute_capability = 610, cudnn_half = 0, GPU: NVIDIA GeForce GTX 1070 Ti
net.optimized_memory = 0
mini_batch = 1, batch = 8, time_steps = 1, train = 0
layer filters size/strd(dil) input output
0 Create CUDA-stream - 0
Create cudnn-handle 0
conv 32 3 x 3/ 1 512 x 512 x 3 -> 512 x 512 x 32 0.453 BF
1 conv 64 3 x 3/ 2 512 x 512 x 32 -> 256 x 256 x 64 2.416 BF
2 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
3 route 1 -> 256 x 256 x 64
4 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
5 conv 32 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 32 0.268 BF
6 conv 64 3 x 3/ 1 256 x 256 x 32 -> 256 x 256 x 64 2.416 BF
7 Shortcut Layer: 4, wt = 0, wn = 0, outputs: 256 x 256 x 64 0.004 BF
8 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
9 route 8 2 -> 256 x 256 x 128
10 conv 64 1 x 1/ 1 256 x 256 x 128 -> 256 x 256 x 64 1.074 BF
11 conv 128 3 x 3/ 2 256 x 256 x 64 -> 128 x 128 x 128 2.416 BF
12 conv 64 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 64 0.268 BF
13 route 11 -> 128 x 128 x 128
14 conv 64 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 64 0.268 BF
15 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
16 conv 64 3 x 3/ 1 128 x 128 x 64 -> 128 x 128 x 64 1.208 BF
17 Shortcut Layer: 14, wt = 0, wn = 0, outputs: 128 x 128 x 64 0.001 BF
18 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
19 conv 64 3 x 3/ 1 128 x 128 x 64 -> 128 x 128 x 64 1.208 BF
20 Shortcut Layer: 17, wt = 0, wn = 0, outputs: 128 x 128 x 64 0.001 BF
21 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
22 route 21 12 -> 128 x 128 x 128
23 conv 128 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 128 0.537 BF
24 conv 256 3 x 3/ 2 128 x 128 x 128 -> 64 x 64 x 256 2.416 BF
25 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
26 route 24 -> 64 x 64 x 256
27 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
28 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
29 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
30 Shortcut Layer: 27, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
31 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
32 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
33 Shortcut Layer: 30, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
34 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
35 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
36 Shortcut Layer: 33, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
37 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
38 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
39 Shortcut Layer: 36, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
40 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
41 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
42 Shortcut Layer: 39, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
43 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
44 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
45 Shortcut Layer: 42, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
46 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
47 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
48 Shortcut Layer: 45, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
49 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
50 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
51 Shortcut Layer: 48, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
52 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
53 route 52 25 -> 64 x 64 x 256
54 conv 256 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 256 0.537 BF
55 conv 512 3 x 3/ 2 64 x 64 x 256 -> 32 x 32 x 512 2.416 BF
56 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
57 route 55 -> 32 x 32 x 512
58 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
59 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
60 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
61 Shortcut Layer: 58, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
62 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
63 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
64 Shortcut Layer: 61, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
65 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
66 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
67 Shortcut Layer: 64, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
68 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
69 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
70 Shortcut Layer: 67, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
71 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
72 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
73 Shortcut Layer: 70, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
74 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
75 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
76 Shortcut Layer: 73, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
77 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
78 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
79 Shortcut Layer: 76, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
80 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
81 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
82 Shortcut Layer: 79, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
83 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
84 route 83 56 -> 32 x 32 x 512
85 conv 512 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 512 0.537 BF
86 conv 1024 3 x 3/ 2 32 x 32 x 512 -> 16 x 16 x1024 2.416 BF
87 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
88 route 86 -> 16 x 16 x1024
89 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
90 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
91 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
92 Shortcut Layer: 89, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
93 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
94 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
95 Shortcut Layer: 92, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
96 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
97 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
98 Shortcut Layer: 95, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
99 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
100 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
101 Shortcut Layer: 98, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
102 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
103 route 102 87 -> 16 x 16 x1024
104 conv 1024 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x1024 0.537 BF
105 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
106 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
107 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
108 max 5x 5/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.003 BF
109 route 107 -> 16 x 16 x 512
110 max 9x 9/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.011 BF
111 route 107 -> 16 x 16 x 512
112 max 13x13/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.022 BF
113 route 112 110 108 107 -> 16 x 16 x2048
114 conv 512 1 x 1/ 1 16 x 16 x2048 -> 16 x 16 x 512 0.537 BF
115 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
116 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
117 conv 256 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 256 0.067 BF
118 upsample 2x 16 x 16 x 256 -> 32 x 32 x 256
119 route 85 -> 32 x 32 x 512
120 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
121 route 120 118 -> 32 x 32 x 512
122 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
123 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
124 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
125 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
126 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
127 conv 128 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 128 0.067 BF
128 upsample 2x 32 x 32 x 128 -> 64 x 64 x 128
129 route 54 -> 64 x 64 x 256
130 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
131 route 130 128 -> 64 x 64 x 256
132 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
133 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
134 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
135 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
136 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
137 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
138 conv 255 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 255 0.535 BF
139 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.20
nms_kind: greedynms (1), beta = 0.600000
140 route 136 -> 64 x 64 x 128
141 conv 256 3 x 3/ 2 64 x 64 x 128 -> 32 x 32 x 256 0.604 BF
142 route 141 126 -> 32 x 32 x 512
143 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
144 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
145 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
146 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
147 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
148 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
149 conv 255 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 255 0.267 BF
150 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000
151 route 147 -> 32 x 32 x 256
152 conv 512 3 x 3/ 2 32 x 32 x 256 -> 16 x 16 x 512 0.604 BF
153 route 152 116 -> 16 x 16 x1024
154 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
155 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
156 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
157 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
158 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
159 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
160 conv 255 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 255 0.134 BF
161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 91.095
avg_outputs = 757643
Allocate additional workspace_size = 9.44 MB
Try to load weights: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights
Loading weights from K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights...
seen 64, trained: 0 K-images (0 Kilo-batches_64)
Done! Loaded 162 layers from weights-file
Couldn't open file: C:/Users/PlumStation/Desktop/OmniTrax_Testing/YOLOv4-COCO/coco.names
Error: Not freed memory blocks: 56843, total unfreed memory 21.423847 MB
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.`
If you install the add-on from the zip file as instructed, it won't show up in the add-on list.
It appears the zip needs to contain a single top-level folder, for example "omni_trax", holding all the files before the add-on will appear.
Thanks for the detailed instructions, especially with the tricky CUDA bits!
I really liked that enabling the addon nicely installs all the dependencies, and the screenshots and detailed steps will definitely make the tool accessible to a wider range of users.
I have some suggestions but none are vital, so feel free to take/leave them as you see fit.
Hope this helps!
I followed the instructions in the guide for installation on Ubuntu, but unfortunately got an error during the test drive.
When hitting TRACK I got:
OSError: /home/sminano/.config/blender/2.92/scripts/addons/omni_trax/darknet/libdarknet.so: cannot open shared object file: No such file or directory
Using Ubuntu 20.04 (and Blender 2.92.0).
Let me know if any more info is required!
Hi @FabianPlum!
I tested MA pose estimation in OmniTrax using the test data you provided:
OS: Ubuntu 20.04
Video: multiple_ants_1920x1080_01.mp4
YOLO Network: atta_single_class/yolov4-big_and_small_ants_320.cfg
DLC Network: DLC_ANT-POSE-MIXED_resnet_101_iteration-0_shuffle-1
With these parameters:
I was wondering if this is also what you are getting?
Thank you!
Hi @FabianPlum! Thanks again for your hard work. Your new codebase looks great!
While testing the notebook Tracking_Dataset_Processing.ipynb, I encountered this error at the image-append call in the last cell.
My matplotlib version is 3.8.3.
My proposed solution is to change `ax.images.append(im)` to `ax.add_image(im)`.
Let me know what you think :)
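The swap can be sanity-checked outside Blender with a few lines (headless Agg backend and dummy data; this is an illustration, not the notebook's actual code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import AxesImage

fig, ax = plt.subplots()
im = AxesImage(ax)             # a bare image artist, as in the notebook
im.set_data(np.zeros((4, 4)))

# ax.images.append(im) bypasses Matplotlib's artist bookkeeping and breaks
# on newer releases; Axes.add_image() registers the artist properly.
ax.add_image(im)
assert im in ax.images
```

On recent Matplotlib releases (the reviewer's 3.8.3 included), `ax.images` is a read-only view of the Axes' artists, which is why the original `append` call fails.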
Thanks! :)
Hi Fabi,
The codebase is looking great! I think it's gonna be very useful for a lot of people.
I just had a quick go at running the tests locally on a Linux machine (thanks for adding the steps!), and I got an error that looks like a refactoring side effect (`omni_trax.utils` not found). I didn't dig further, but I figured I'd let you know just in case.
Originally posted by @sfmig in #26 (comment)
Hi,
I had a go at the pose estimation full-frame tutorial and am getting this error when selecting to export the data:
Using C:\Users\sminano\Downloads\VID_20220201_160304.mp4 for pose estimation...
Traceback (most recent call last):
File "C:\Users\sminano\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\omni_trax\__init__.py", line 783, in execute
pose_output_file.write(pose_joint_header_l1 + "\n")
UnboundLocalError: local variable 'pose_joint_header_l1' referenced before assignment
Error: Python: Traceback (most recent call last):
File "C:\Users\sminano\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\omni_trax\__init__.py", line 783, in execute
pose_output_file.write(pose_joint_header_l1 + "\n")
UnboundLocalError: local variable 'pose_joint_header_l1' referenced before assignment
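The traceback suggests `pose_joint_header_l1` is only assigned inside a branch that was never taken before being written out. A minimal sketch of the pattern and a defensive fix (function and variable names are hypothetical, not OmniTrax's actual code):

```python
def write_pose_header(skeleton=None):
    # Bug pattern: a name assigned only inside a conditional branch and then
    # used unconditionally raises UnboundLocalError when the branch is skipped.
    header = None  # initialise up front so the failure mode is explicit
    if skeleton is not None:
        header = ",".join(skeleton)
    if header is None:
        raise ValueError("no pose data loaded; cannot write pose header")
    return header + "\n"
```

Initialising the variable up front and raising a descriptive error would turn the crash into an actionable message when export is requested without pose data.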
These are my pose estimation parameters:
I used the sample video provided VID_20220201_160304.mp4
I also noticed that the Blender file explorer that pops up when filling in the DLC model guides you to select a file, but actually we need to select only a folder. Maybe a clarification could be added to the tutorial, something like 'double-click Accept to select the parent folder'?
| Status | Count |
|---|---|
| Total | 185 |
| Successful | 182 |
| Timeouts | 0 |
| Redirected | 0 |
| Excluded | 2 |
| Unknown | 0 |
| Errors | 1 |
This issue is part of the JOSS review for this repo.
The repo provides a lot of very useful data, like sample footage and trained models, but I found that some of the links to the data were broken (specifically in docs/example_footage.md and docs/tutorial-tracking.md).
I used this pre-commit hook as a quick way to check all links in markdown files; below is the summary output:
I leave you the full output below for reference (it may be useful to pin down the specific links).
(node:47806) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
FILE: docs/CUDA_installation_guide.md
[✓] https://en.wikipedia.org/wiki/CUDA
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
[✓] https://developer.nvidia.com/rdp/cudnn-archive
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] CUDA_installation_images/CUDA_01.PNG
[✓] CUDA_installation_images/CUDA_02.PNG
[✓] CUDA_installation_images/CUDA_03.PNG
[✓] CUDA_installation_images/CUDA_04.PNG
[✓] CUDA_installation_images/CUDA_05.PNG
[✓] CUDA_installation_images/CUDA_06.PNG
[✓] CUDA_installation_images/CUDA_07.PNG
[✓] CUDA_installation_images/CUDA_08.PNG
[✓] CUDA_installation_images/CUDA_09.PNG
[✓] CUDA_installation_images/CUDA_10.PNG
[✓] CUDA_installation_images/CUDA_11.PNG
[✓] CUDA_installation_images/CUDA_12.PNG
[✓] CUDA_installation_images/CUDA_13.PNG
[✓] CUDA_installation_images/CUDA_14.PNG
21 links checked.
FILE: .github/CONTRIBUTING.md
[✓] #introduction
[✓] #getting-started
[✓] #contributing
[✓] #reporting-bugs
[✓] #suggesting-enhancements
[✓] #code-contributions
[✓] #contact
[✓] mailto:[email protected]
[✓] https://github.com/FabianPlum/OmniTrax/tree/main/docs
[✓] https://github.com/FabianPlum/OmniTrax/issues
[✓] https://github.com/FabianPlum/OmniTrax/issues/new/choose
[✖] https://github.com/omnitrax/omnitrax
12 links checked.
ERROR: 1 dead links found!
[✖] https://github.com/omnitrax/omnitrax → Status: 404
FILE: docs/example_footage.md
[✓] ../README.md
[✓] https://drive.google.com/file/d/1I0vla-CyTYpNIKNRJIzegxJ44WGyQ291/view?usp=share_link
[✓] https://drive.google.com/file/d/1f417gbG7nt3xMIfKZgmr-3gPEUm_DPoJ/view?usp=share_link
[✓] https://drive.google.com/file/d/1pa4hD-64JroByLavQZCvigMs7RGVxyvs/view?usp=share_link
[✓] https://drive.google.com/file/d/1n-SRw7hswtMpaaXoGLuu_i1SFPgCBKoh/view?usp=share_link
[✓] https://drive.google.com/file/d/1esvN2C4Egto_kZFWg5qGsETVphaa3aSi/view?usp=share_link
[✓] https://drive.google.com/file/d/1rn4WUGyh8gotdC_UuVHIhqlqVn6sOhob/view?usp=share_link
[✖] https://drive.google.com/file/d/10d2YuEpx62UOU8oQ1179XVxuKZOCbUZ5/view?usp=share_link
[✓] https://drive.google.com/file/d/1X5fNkaEkALo1lgAu4HsKgzyZSAEapIq_/view?usp=share_link
[✓] https://drive.google.com/file/d/109u6MyJFlLaiHaf08OavPWS8I6KvmPee/view?usp=share_link
[✓] https://drive.google.com/file/d/1izoE7bLScQODYloV5B6bwzWtJ4jcqp1K/view?usp=sharing
[✓] https://drive.google.com/file/d/1XzZmgkBUKeA3Q1YeYGMbwQoqsYtYgcjF/view?usp=sharing
[✓] https://drive.google.com/drive/folders/14wBXFhV1KI4nD_TZXTrZssXqdOsWuDwk?usp=sharing
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/preview_tracking.gif
[✓] ../images/multi_ants_online_tracking_&_pose_estimation.gif
17 links checked.
ERROR: 1 dead links found!
[✖] https://drive.google.com/file/d/10d2YuEpx62UOU8oQ1179XVxuKZOCbUZ5/view?usp=share_link → Status: 404
FILE: .github/ISSUE_TEMPLATE/bug_report.md
No hyperlinks found!
0 links checked.
(node:47807) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
FILE: docs/tutorial-tracking.md
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] trained_networks.md
[✓] example_footage.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] ../README.md
[✓] ../images/example_ant_recording.mp4
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://www.blender.org/download/lts/3-3/
[✓] CUDA_installation_guide.md
[✓] https://github.com/FabianPlum/FARTS
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.1
[✓] tutorial-pose-estimation.md
[✓] https://en.wikipedia.org/wiki/Kalman_filter
[✓] https://en.wikipedia.org/wiki/Hungarian_algorithm
[✓] https://github.com/FabianPlum/blenderMotionExport
[✓] https://github.com/Amudtogal
[✓] ../example_scripts/Tracking_Dataset_Processing.ipynb
[✖] ..images/example_ant_recording.mp4
[✓] ../example_scripts/example_ant_recording
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/use_01.jpg
[✓] ../images/use_02.jpg
[✓] ../images/use_03.jpg
[✓] ../images/masking_01.png
[✓] ../images/masking_02.png
[✓] ../images/masking_03.png
[✓] ../images/use_04.gif
[✓] ../images/example_ant_tracked.gif
[✖] ../example_scripts/_heatmap_of_ground_truth_tracks.svg
[✓] ../images/ase_01.jpg
[✓] ../images/ase_new_02.jpg
35 links checked.
ERROR: 2 dead links found!
[✖] ..images/example_ant_recording.mp4 → Status: 400
[✖] ../example_scripts/_heatmap_of_ground_truth_tracks.svg → Status: 400
FILE: docs/trained_networks.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://drive.google.com/drive/folders/11QXseJwISdodSnXJV6fwM97XfT2aXx2y?usp=sharing
[✓] https://drive.google.com/drive/folders/1wQcfLlDUvnWthyzbvyVy9oqyTZ2F-JFo?usp=sharing
[✓] https://drive.google.com/drive/folders/1U9jzOpjCcu6wDfTEH3uQqGKPxW_QzHGz?usp=sharing
[✓] https://drive.google.com/drive/folders/1eXAowtyBsqGEjvmQE1YlSeHJ6AGBwpUs?usp=share_link
[✓] https://github.com/AlexeyAB/darknet/wiki/YOLOv4-model-zoo
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] https://drive.google.com/drive/folders/1or1TF3tvi1iIzldEAia3G2RNKY5J7Qz4?usp=sharing
[✓] https://drive.google.com/drive/folders/1FY3lAkAisOG_RIUBuaynz1OjBkzjH5LL?usp=sharing
[✓] https://drive.google.com/file/d/1IH9R9PgJMYteigsrMi-bZnz4IMcydtWU/view?usp=sharing
[✓] https://drive.google.com/drive/folders/1-DHkegHiTkWbO7YboXxDC5tU4Aa71-9z?usp=share_link
[✓] https://drive.google.com/drive/folders/1BLulUYkwww7SfzXgSSVM71GLI4dQysP5?usp=share_link
[✓] https://arxiv.org/abs/1605.03170
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] tutorial-pose-estimation.md
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
19 links checked.
FILE: .github/ISSUE_TEMPLATE/feature_request.md
No hyperlinks found!
0 links checked.
FILE: README.md
[✓] https://github.com/FabianPlum/OmniTrax/releases
[✓] https://github.com/FabianPlum/OmniTrax
[✓] https://www.python.org/
[✓] https://app.travis-ci.com/github/FabianPlum/OmniTrax
[✓] https://github.com/FabianPlum/FARTS
[✓] https://youtu.be/YXxM4QRaCDU
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.3.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.3.0
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.3
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.2
[✓] https://github.com/FabianPlum/OmniTrax/blob/main/docs/tutorial-tracking.md
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.0
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.3
[✓] https://www.blender.org/download/lts/3-3/
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.2
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.0.2
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.0.1
[✓] https://download.blender.org/release/Blender2.92/
[✓] https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
[✓] https://developer.nvidia.com/rdp/cudnn-archive
[✓] https://www.tensorflow.org/install/source#gpu
[✓] https://www.blender.org/download/release/Blender3.3/blender-3.3.1-windows-x64.msi/
[✓] docs/CUDA_installation_guide.md
[✓] https://github.com/FabianPlum/OmniTrax/releases/download/V_0.2.3/omni_trax.zip
[✓] docs/tutorial-tracking.md
[✓] docs/tutorial-pose-estimation.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] images/example_ant_recording.mp4
[✓] docs/trained_networks.md
[✓] docs/example_footage.md
[✓] https://choosealicense.com/licenses/mit/
[✓] https://img.shields.io/github/tag/FabianPlum/OmniTrax.svg?label=version&style=flat
[✓] https://img.shields.io/github/license/FabianPlum/OmniTrax.svg?style=flat
[✓] https://img.shields.io/badge/Made%20with-Python-1f425f.svg
[✓] https://app.travis-ci.com/FabianPlum/OmniTrax.svg?branch=main
[✓] images/omnitrax_logo.svg#gh-dark-mode-only
[✓] images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] images/preview_tracking.gif
[✓] images/single_ant_1080p_POSE_track_0.gif
[✓] images/single_ant_1080p_POSE_track_0_skeleton.gif
[✓] images/omnitrax_demo_screen_updated.jpg
[✓] images/install_01.jpg
[✓] images/install_02.jpg
[✓] images/install_03.jpg
[✓] images/install_04.jpg
[✓] images/install_05.jpg
[✓] images/install_06.jpg
[✓] images/use_01.jpg
[✓] images/use_02.jpg
[✓] images/use_03.jpg
[✓] images/use_04.gif
56 links checked.
(node:47808) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
FILE: docs/tutorial-pose-estimation.md
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] trained_networks.md
[✓] example_footage.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] ../README.md
[✓] https://github.com/FabianPlum/FARTS
[✓] https://drive.google.com/file/d/156t8r3ZHrkzC72jZapFl9OBFPqNIvIXg/view?usp=share_link
[✓] https://drive.google.com/drive/folders/1-DHkegHiTkWbO7YboXxDC5tU4Aa71-9z?usp=share_link
[✓] https://drive.google.com/file/d/1izoE7bLScQODYloV5B6bwzWtJ4jcqp1K/view?usp=sharing
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://drive.google.com/drive/folders/1FY3lAkAisOG_RIUBuaynz1OjBkzjH5LL?usp=sharing
[✓] tutorial-tracking.md
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/single_ant_1080p_POSE_track_0.gif
[✓] ../images/single_ant_1080p_POSE_track_0_skeleton.gif
[✓] ../images/VID_20220201_160304_50%25_POSE_fullframe.gif
[✓] ../images/multi_ants_online_tracking_&_pose_estimation.gif
[✓] ../images/Human_Tracking.gif
[✓] ../images/Human_POSE_fullframe.gif
23 links checked.
FILE: paper/paper.md
[✓] ../images/omnitrax_demo_screen.jpg
1 links checked.
Lucas, from your JOSS submission, again.
The package so far looks awesome, to be honest, but it took me a while to get my hands on a proper machine to install it on. What are your thoughts on, and what would the main issues be with, making at least the CPU version compatible with other operating systems (i.e. macOS, Linux)?
Best!
Hi,
I followed the multi-animal pose estimation tutorial - very fun! But I got a bit confused about the following:
I set the Pose (input) frame size (px) to match the constant detection size in the detector panel (both 400 px), but one of the pose estimation crops (track_5) still showed a very zoomed-in ant. Am I misunderstanding the constant detection size parameter? Is this expected? See screenshot below.
Below are some additional suggestions for the tutorial:
Would it be possible to suggest a constant detection size for the sample data provided? I used 400 px which seemed to cover all animals in full.
It could also be nice to add some tips on how to transform the body parts' coordinates from the cropped video space to the full video space using the exported data - I would expect most users would end up doing this. If a script for this already exists, maybe it could be linked here.
In the detector panel: are both "constant detection size" and "minimum detection size" only enabled when the constant detection sizes checkbox is ticked? If so, could this be clarified? Maybe they could be greyed out when the checkbox is unticked, or a short text clarification added.
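For the crop-to-full-frame transform suggested above, a minimal sketch could look like the following. The square-crop-centred-on-the-detection convention and the helper name `crop_to_fullframe` are assumptions for illustration, not OmniTrax's actual export format:

```python
def crop_to_fullframe(x_crop, y_crop, track_cx, track_cy, crop_size=400):
    """Map a body-part coordinate from cropped-video space to full-video space.

    Assumes the crop is a square of `crop_size` pixels centred on the
    track's detection centre (track_cx, track_cy) in the full frame.
    """
    x_full = track_cx - crop_size / 2 + x_crop
    y_full = track_cy - crop_size / 2 + y_crop
    return x_full, y_full

# A body part at the centre of a 400 px crop whose track centre sits at
# (960, 540) maps back to (960, 540) in the full frame.
print(crop_to_fullframe(200, 200, 960, 540))  # (960.0, 540.0)
```

Applied per frame to each track's exported centre coordinate, this would let users overlay pose keypoints on the original footage.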
Thanks!
Status | Count |
---|---|
🔍 Total | 197 |
✅ Successful | 191 |
⏳ Timeouts | 0 |
🔀 Redirected | 0 |
👻 Excluded | 2 |
❓ Unknown | 0 |
🚫 Errors | 4 |
I installed the add-on as administrator as instructed and used absolute paths for the pretrained YOLO network.
The add-on shows a ticked checkbox, which the instructions say indicates a successful install.
Any attempt to track a video in the motion tracking workspace results in a crash.
The CPU version crashes with a message that it can't find a darknet CPU DLL; the GPU version crashes to desktop.
Sorry for not being more specific about the DLL - I uninstalled Blender for now and don't have that log.
Booting Blender in debug mode shows me no related errors.
May I ask if you could also run your codebase through ruff or flake8? They would give more tips on how to improve the entire code/codebase. :)

```shell
pip install ruff
ruff check .
```

or

```shell
pip install flake8
flake8 .
```

You can also use ruff to fix any fixable errors automatically:

```shell
ruff check . --fix  # lint all files in the current directory and fix any fixable errors
```
Originally posted by @rizarae-p in #26 (comment)
According to JOSS policy:
In the exceptional case a JOSS submission contains original data on animal research, the corresponding author must confirm that the data was collected in accordance with the latest guidelines and applicable regulations
Does this need to be added? 🤔
I am not sure whether the sample data shared in the repo is "original" (meaning not published elsewhere), or whether this policy applies to invertebrates, but it may just mean adding a sentence to the README... I saw that the replicAnt paper needed no such statement, so I am not sure it applies here.
Status | Count |
---|---|
🔍 Total | 197 |
✅ Successful | 194 |
⏳ Timeouts | 0 |
🔀 Redirected | 0 |
👻 Excluded | 2 |
❓ Unknown | 0 |
🚫 Errors | 1 |
Status | Count |
---|---|
🔍 Total | 197 |
✅ Successful | 191 |
⏳ Timeouts | 0 |
🔀 Redirected | 0 |
👻 Excluded | 2 |
❓ Unknown | 0 |
🚫 Errors | 4 |
At step 5 ("Masking regions of interest (OPTIONAL)") at:
https://github.com/FabianPlum/OmniTrax/blob/main/docs/tutorial-tracking.md
it appears that having a mask is not optional but a requirement in 0.2.2, because the tracking process won't start without one. When clicking the track button, the Blender console simply says:
Running inference on: [LogicalDevice(name='/device:CPU:0', device_type='CPU'), LogicalDevice(name='/device:GPU:0', device_type='GPU')]
INFO: Initialised darknet network found!
'bpy_prop_collection[key]: key "Mask" not found'
No mask found!
And then it stops.
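A defensive lookup along these lines might avoid the hard stop when no mask exists, falling back to full-frame tracking instead. This is a hypothetical sketch - the helper name `get_mask_or_none` is not an OmniTrax function, and a plain dict stands in for the `bpy` collection (which raises `KeyError` in the same way):

```python
def get_mask_or_none(collection, name="Mask"):
    """Return the named mask if present, otherwise None, so tracking
    can proceed without masking instead of aborting."""
    try:
        return collection[name]  # bpy_prop_collection also raises KeyError
    except KeyError:
        print("No mask found - tracking the full frame.")
        return None

# Stand-in for bpy.data.masks: an empty dict means no mask was created.
masks = {}
mask = get_mask_or_none(masks)
print(mask)  # None
```

If `None` is returned, the tracker could simply skip the masking step rather than stopping.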
Status | Count |
---|---|
🔍 Total | 197 |
✅ Successful | 191 |
⏳ Timeouts | 0 |
🔀 Redirected | 0 |
👻 Excluded | 2 |
❓ Unknown | 0 |
🚫 Errors | 4 |
Is there any trained model for human body tracking?
The following are just suggestions but I think they could make it easier for external people to contribute. Feel free to take them or leave them as you please!
I found the `__init__.py` file to be quite bloated - it makes it a bit difficult to inspect and contribute. I would suggest refactoring the classes into separate modules and keeping only the register/unregister functions in the init file. We followed that approach in this project if you want to have a look. We also separated operators, properties and UI components into separate modules (and separately for different subpackages).
Maybe consider having a few subpackages to group some of the modules that currently live in the root - this may make it easier for a contributor to identify at a glance which parts of the code are relevant for a specific feature/bug. For example, a `tracking` subpackage could include the modules `tracker.py`, `yolo_tracker.py` and `kalman_filter_new.py`, maybe? Or maybe the CUDA and package checks could be similarly grouped?
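As a rough sketch of that layout, the init file could reduce to something like the following. The class and module names here are illustrative stand-ins, not actual OmniTrax code, and the real `bpy.utils` calls appear only as comments:

```python
# Hypothetical slim __init__.py: submodules own the classes, the init
# file only wires registration together.

class TrackOperator:   # would live in e.g. tracking/operators.py
    registered = False

class PoseOperator:    # would live in e.g. pose/operators.py
    registered = False

CLASSES = [TrackOperator, PoseOperator]

def register():
    for cls in CLASSES:
        cls.registered = True   # in Blender: bpy.utils.register_class(cls)

def unregister():
    for cls in reversed(CLASSES):
        cls.registered = False  # in Blender: bpy.utils.unregister_class(cls)
```

Blender add-ons commonly unregister in reverse order so that classes depending on earlier ones are torn down first.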
Hi @FabianPlum! I've read through the paper, but I noticed there wasn't much discussion on this topic. Could you possibly include a bit more discussion on this particular point?
State of the field: Do the authors describe how this software compares to other commonly-used packages?
Thank you!
Status | Count |
---|---|
🔍 Total | 197 |
✅ Successful | 194 |
⏳ Timeouts | 0 |
🔀 Redirected | 0 |
👻 Excluded | 2 |
❓ Unknown | 0 |
🚫 Errors | 1 |