fabianplum / omnitrax

Deep learning-driven multi-animal tracking and pose estimation add-on for Blender

License: MIT License

Python 91.45% Jupyter Notebook 7.00% TeX 1.55%
blender deeplabcut deeplearning object-detection tracking yolo

omnitrax's Introduction


Deep learning-based multi-animal tracking and pose estimation Blender Add-on.


 

Automated multi-animal tracking example (trained on synthetic data)

OmniTrax is an open-source Blender Add-on designed for deep learning-driven multi-animal tracking and pose estimation. It leverages recent advances in deep learning-based detection (YOLOv3, YOLOv4) and computationally inexpensive buffer-and-recover tracking techniques. OmniTrax integrates with Blender's internal motion tracking pipeline, making it an excellent tool for annotating and analysing large video files containing numerous freely moving subjects. Additionally, it integrates DeepLabCut-Live for marker-less pose estimation on arbitrary numbers of animals, using both the DeepLabCut Model Zoo and custom-trained detector and pose estimator networks.
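
Conceptually, the buffer-and-recover tracker follows a detect-then-associate pattern: YOLO detections are matched to existing tracks frame by frame, and unmatched tracks are kept alive briefly so they can be recovered once their animal is detected again. As a rough illustration only (not OmniTrax's actual implementation, which lives in tracker.py and kalman_filter_new.py and additionally uses Kalman-filter predictions), the association step can be sketched with the Hungarian algorithm:

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_positions, detections, max_distance=50.0):
    # track_positions: predicted (x, y) centres of existing tracks
    # detections: (x, y) centres of the current frame's YOLO detections
    if len(track_positions) == 0 or len(detections) == 0:
        return []
    cost = np.linalg.norm(
        np.asarray(track_positions)[:, None, :] - np.asarray(detections)[None, :, :],
        axis=2,
    )
    rows, cols = linear_sum_assignment(cost)
    # Pairs further apart than max_distance stay unmatched; such tracks are
    # "buffered" and may be recovered in a later frame.
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] <= max_distance]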

OmniTrax is designed to be a plug-and-play toolkit for biologists, facilitating the extraction of kinematic and behavioural data from freely moving animals. It can, for example, be used in population monitoring applications, especially in changing environments where background subtraction methods may fail. This ability can be amplified by using detection models trained on highly variable, synthetically generated data. OmniTrax also lends itself well to annotating training and validation data for detector and tracker networks, or to providing instance and pose data for size classification and unsupervised behavioural clustering tasks.

Pose estimation and skeleton overlay example (trained on synthetic data)

OmniTrax: Multi-Animal Tracking Demo

Operating System Support

Important

OmniTrax runs on both Windows 10 / 11 and Ubuntu systems. However, the installation steps differ between the two, as do CPU vs GPU inference support and the Blender version required for dependency compatibility.

Operating System       Blender Version   CPU inference   GPU inference
Windows 10 / 11        3.3               Yes             Yes
Ubuntu 18.04 / 20.04   2.92              Yes             No

Installation Guide

Requirements / Notes

  • OmniTrax GPU inference is currently only supported on Windows 10 / 11. For CPU-only use on Ubuntu, install Blender version 2.92.0 and skip the steps on CUDA installation.
  • Download and install Blender LTS 3.3 to match the add-on's dependencies. If you are planning to run inference on your CPU instead (which is considerably slower), use Blender version 2.92.0.
  • As OmniTrax uses TensorFlow 2.7, running inference on your GPU requires CUDA 11.2 and cuDNN 8.1. Refer to this official guide for version matching and installation instructions. A quick way to verify the GPU setup is shown after this list.
  • When installing the OmniTrax package on Windows, you need to run Blender in administrator mode; otherwise, the additional required Python packages may not install correctly.
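
To quickly verify that Blender's bundled Python can see your GPU, you can run the following generic TensorFlow check in Blender's Python console. Note that this is not an OmniTrax function, and TensorFlow only becomes available once the add-on's dependencies have been installed in the steps below:

import tensorflow as tf

# Should list at least one PhysicalDevice of type GPU if CUDA 11.2 and
# cuDNN 8.1 are installed and found on the PATH.
print(tf.config.list_physical_devices("GPU"))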

Step-by-step installation

  1. Install Blender LTS 3.3 from the official website. Simply download blender-3.3.1-windows-x64.msi and follow the installation instructions.

Tip

If you are new to using Blender, have a look at the official Blender docs to learn how to set up a workspace and arrange different types of editor windows.

  2. Install CUDA 11.2 and cuDNN 8.1.0. Here, we provide a separate CUDA installation guide.

    • For advanced users: if you already have a separate CUDA installation on your system, make sure to additionally install version 11.2 and update your PATH environment variable. Conflicting versions may prevent OmniTrax from finding your GPU, which can lead to unexpected crashes.
  3. Download the latest release of OmniTrax. There is no need to unzip the file! You can install it straight from the Blender > Preferences > Add-ons menu in the next step.

  4. Open Blender in administrator mode. You only need to do this once, during the installation of OmniTrax. Once everything is up and running, you can open Blender normally in the future.

  5. Open the Blender system console to see the installation progress and display information.

Tip

On Ubuntu this option is missing. To display the equivalent information, launch Blender directly from a terminal; that terminal will then show the output while Blender is running.

  6. Next, open (1) Edit > (2) Preferences... and under Add-ons click on (3) Install.... Then, locate the downloaded (4) omni_trax.zip file, select it, and click on (5) Install Add-on.

  7. The omni_trax Add-on should now be listed. Enabling the Add-on will start the installation of all required Python dependencies.

The installation will take quite a while, so have a look at the System Console to see the progress. Grab a cup of coffee (or tea) in the meantime.

There may be a few warnings displayed throughout the installation process; however, as long as no errors occur, all should be good. If the installation is successful, a check mark will be displayed next to the Add-on and the console should let you know that "[...] all looks good here!". Once the installation is complete, you can launch Blender with regular user privileges.
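
If you want to confirm from within Blender that the Add-on registered, a minimal check can be run in Blender's Python console (assuming the Add-on module is named omni_trax, as the zip file name suggests):

import bpy

# Prints True if the Add-on is enabled in the current preferences.
print("omni_trax" in bpy.context.preferences.addons.keys())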

A quick test drive (Detection & Tracking)

For a more detailed guide, refer to the Tracking and Pose-Estimation docs.

1. In Blender, with the OmniTrax Addon enabled, create a new Workspace from the VFX > Motion_Tracking tab.

2. Next, select your compute device. If you have a CUDA-supported GPU (and the CUDA installation went as planned...), make sure your GPU is selected here before running any of the inference functions, as the compute device cannot be changed at runtime. By default, assuming your computer has one supported GPU, OmniTrax will select it as GPU_0.

3. Now it's time to load a trained YOLO network. In this example, we are going to use a single-class ant detector, trained on synthetically generated data. The YOLOv4 network can be downloaded here.

By clicking on the folder icon next to each cell, select the respective .cfg and .weights files. Here, we are using a network input resolution of 480 x 480. The same weights file can be used for all input resolutions.

Important

OmniTrax versions 0.2.x and later no longer require .data and .names files, so providing them is optional. For more info on when you would need these files, refer to the extended Tracking tutorial.

Here, you only need to set the paths for the following files (an optional way to sanity-check them outside Blender is sketched after this list):

  • .cfg
  • .weights
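
As an optional sanity check outside of Blender, the same .cfg / .weights pair can be loaded with OpenCV's darknet reader; this is just a quick way to confirm that the two files belong together (the file names below are placeholders for the downloaded network files):

import cv2

# Raises an error if the config and weights are unreadable or incompatible.
net = cv2.dnn.readNetFromDarknet("yolov4_ants.cfg", "yolov4_ants.weights")
print(len(net.getLayerNames()), "layers loaded")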

Tip

After setting up your workspace, consider saving your project by pressing CTRL + S

Saving your project also saves your workspace, so in the future you can use this file to begin tracking right away!

4. Next, load a video you wish to analyse from your drive by clicking on Open (see image above). In this example we are using example_ant_recording.mp4.

5. Click on RESTART Track (or TRACK to continue tracking from a specific frame in the video). If you wish to stop the tracking process early, click on the video (which will open in a separate window) and press q to terminate the process.

OmniTrax will continue to track your video until it has either reached its last frame, or the End Frame (by default 250) which can be set in the Detection (YOLO) >> Processing settings.

Note

The ideal settings for the Detector and Tracker will always depend on your footage, especially the relative animal size and movement speed. Remember: GIGO (Garbage In, Garbage Out). Ensuring your recordings are evenly lit and free from noise, flicker, and motion blur will go a long way towards improving inference quality. Refer to the full Tracking tutorial for an in-depth explanation of each setting.

User guides

Trained networks and config files

We provide a number of trained YOLOv4 and DeepLabCut networks to get started with OmniTrax: trained_networks

Example Video Footage

Additionally, you can download a few of our video examples to get started with OmniTrax: example_footage

Upcoming feature additions

  • add an option to exclude the last N frames from tracking, so interpolated tracks do not influence further analysis
  • add bounding box stabilisation for YOLO detections, using moving averages of the corner positions (see the sketch after this list)
  • add an option to exit pose estimation completely while running inference (important when the number of tracks is large)
  • add a progress bar for all tasks
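
For the bounding box stabilisation item above, the idea is a simple moving average over recent corner positions. A minimal sketch of that idea (an illustration, not the planned implementation):

from collections import deque

class BoxSmoother:
    # Moving-average smoothing of bounding-box corners over the last N frames.
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, box):
        # box = (x_min, y_min, x_max, y_max) in pixels for the current frame
        self.history.append(box)
        n = len(self.history)
        return tuple(sum(b[i] for b in self.history) / n for i in range(4))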

Updates:

  • 14/03/2024 - Added release version 1.0.0, the official release accompanying the software paper.
  • 05/12/2023 - Added release version 0.3.1 improved exception handling and stability.
  • 11/10/2023 - Added release version 0.3.0 minor fixes, major Ubuntu support! (well, on CPU at least)
  • 02/07/2023 - Added release version 0.2.3 fixing prior issues relating to masking and yolo path handling.
  • 26/03/2023 - Added release version 0.2.2 which adds support for footage masking and advanced sample export (see tutorial-tracking for details).
  • 28/11/2022 - Added release version 0.2.1 with updated YOLO and DLC-live model handling to accommodate different file structures.
  • 09/11/2022 - Added release version 0.2.0 with improved DLC-live pose estimation for single and multi-animal applications.
  • 02/11/2022 - Added release version 0.1.3 which includes improved tracking from previous states, faster and more robust track transfer, building skeletons from DLC config files, improved package installation and start-up checks, a few bug fixes, and GPU compatibility with the latest release of Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
  • 06/10/2022 - Added release version 0.1.2 with GPU support for latest Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
  • 19/02/2022 - Added release version 0.1.1! Things run a lot faster now and I have added support for devices without dedicated GPUs.
  • 06/12/2021 - Added the first release version 0.1! Lots of small improvements and mammal fixes. Now, it no longer feels like a pre-release and we can all give this a try. Happy Tracking!
  • 29/11/2021 - Added pre-release version 0.0.2, with DeepLabCut-Live support, tested for Blender 2.92.0 only
  • 20/11/2021 - Added pre-release version 0.0.1, tested for Blender 2.92.0 only

References

When using OmniTrax and/or our other projects in your work, please make sure to cite them:

@article{Plum2024,
    doi = {10.21105/joss.05549},
    url = {https://doi.org/10.21105/joss.05549},
    year = {2024},
    publisher = {The Open Journal},
    volume = {9},
    number = {95},
    pages = {5549},
    author = {Fabian Plum},
    title = {OmniTrax: A deep learning-driven multi-animal tracking and pose-estimation add-on for Blender},
    journal = {Journal of Open Source Software}
}

@article{Plum2023a,
    title = {replicAnt: a pipeline for generating annotated images of animals in complex environments using Unreal Engine},
    author = {Plum, Fabian and Bulla, René and Beck, Hendrik K and Imirzian, Natalie and Labonte, David},
    doi = {10.1038/s41467-023-42898-9},
    issn = {2041-1723},
    journal = {Nature Communications},
    url = {https://doi.org/10.1038/s41467-023-42898-9},
    volume = {14},
    year = {2023}
    }

License

© Fabian Plum, 2023 MIT License

omnitrax's People

Contributors

fabianplum


omnitrax's Issues

[JOSS] Questions about dependencies

Some questions about dependencies came up while reviewing:

  • I was wondering if there is an alternative way of having the darknet/YOLO functionality with a PyPI package? My thinking is that otherwise it seems difficult to maintain (if they make updates we won't get them in omnitrax right?), but I'm not very familiar with vendoring. Would something like this tool help in that respect?
  • I was also wondering why a fork of the repo is used, rather than the original repo.
  • If I understand correctly, we are installing all the dependencies alongside the Python interpreter that ships with Blender - could we have conflicts if we install other add-ons with different requirements? Should users be warned about this?

[JOSS] Suggestions on codebase structure

The following are just suggestions but I think they could make it easier for external people to contribute. Feel free to take them or leave them as you please!

  • I found the __init__.py file to be quite bloated - it makes it a bit difficult to inspect and contribute. I would suggest to refactor the classes into separate modules and only keep the register / unregister functions in the init file. We followed that approach in this project if you want to have a look. We also separated operators, properties and UI components into separate modules (and separately for different subpackages).

  • Maybe consider having a few subpackages to group some of the modules that currently sit in the root directory - this may make it easier for a contributor to identify at a glance which parts of the code are relevant for a specific feature/bug. For example, a tracking subpackage could include the modules tracker.py, yolo_tracker.py and kalman_filter_new.py maybe? Or maybe the CUDA and package checks could be similarly grouped?

Link Checker Report

Summary

Status Count
🔍 Total 197
✅ Successful 191
⏳ Timeouts 0
🔀 Redirected 0
👻 Excluded 2
❓ Unknown 0
🚫 Errors 4

Errors per input

Errors in README.md

Errors in docs/trained_networks.md

Errors in docs/tutorial-pose-estimation.md

Errors in docs/tutorial-tracking.md

V0.2.2 crash to desktop during tracking.

Describe the bug
V0.2.2 crash to desktop during tracking.

To Reproduce
Steps to reproduce the behavior:

  1. Install
  2. open up the single ant 1080p video.
    1080p DSLR recording (single-animal pose-estimation)
  3. configure tracking for GPU.
  4. load yolo network with absolute paths YOLOv4-COCO-20230501T235200Z-001
    80 Class model trained on COCO
  5. create simple tracking mask with 4 points around ant.
  6. hit track button.
  7. bbrrrrr crash.

Expected behavior
Just work :)

Screenshots
X

Desktop (please complete the following information):

  • OS: Win 10
  • Version 0.2.2
  • CUDA version 11.2
  • CUDnn version 8.1
  • Hardware GTX 1070, I7

Additional context
Blender refuses to log crash logs. Open Blender through the command prompt to keep a console open.
The console notes a hardcoded path to a YOLO file which does not exist, for a user "PlumStation"?

LOG:

INFO: successfully loaded OmniTrax
Found computational devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Read blend: K:\Blender3D\OmniTrax\Saved\OmniTrax1.blend
2023-05-05 00:48:18.303125: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-05 00:48:18.775515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6440 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1070 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1
Running inference on: [LogicalDevice(name='/device:CPU:0', device_type='CPU'), LogicalDevice(name='/device:GPU:0', device_type='GPU')]

INFO: Initialising darkent network...

<bpy_struct, MaskSpline at 0x000001B09A987308>
0.46663743257522583 0.849280059337616
0.6812906265258789 0.8212818503379822
0.7046225070953369 0.5366330146789551
0.4573046863079071 0.5646312832832336

[[[0.46663743257522583, 0.849280059337616], [0.6812906265258789, 0.8212818503379822], [0.7046225070953369, 0.5366330146789551], [0.4573046863079071, 0.5646312832832336]]]
[[503 162]
[735 193]
[760 500]
[493 470]]
Beginning counting from ID 0
INITIALISED TRACKER!
The imported clip: K:\Blender3D\OmniTrax\Saved..\SourceContent\Recordings\Insect_Ant\single_ant_1080p.mp4 has a total of 2000 frames.

Try to load cfg: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.cfg, weights: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights, clear = 0
0 : compute_capability = 610, cudnn_half = 0, GPU: NVIDIA GeForce GTX 1070 Ti
net.optimized_memory = 0
mini_batch = 1, batch = 8, time_steps = 1, train = 0
layer filters size/strd(dil) input output
0 Create CUDA-stream - 0
Create cudnn-handle 0
conv 32 3 x 3/ 1 512 x 512 x 3 -> 512 x 512 x 32 0.453 BF
1 conv 64 3 x 3/ 2 512 x 512 x 32 -> 256 x 256 x 64 2.416 BF
2 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
3 route 1 -> 256 x 256 x 64
4 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
5 conv 32 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 32 0.268 BF
6 conv 64 3 x 3/ 1 256 x 256 x 32 -> 256 x 256 x 64 2.416 BF
7 Shortcut Layer: 4, wt = 0, wn = 0, outputs: 256 x 256 x 64 0.004 BF
8 conv 64 1 x 1/ 1 256 x 256 x 64 -> 256 x 256 x 64 0.537 BF
9 route 8 2 -> 256 x 256 x 128
10 conv 64 1 x 1/ 1 256 x 256 x 128 -> 256 x 256 x 64 1.074 BF
11 conv 128 3 x 3/ 2 256 x 256 x 64 -> 128 x 128 x 128 2.416 BF
12 conv 64 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 64 0.268 BF
13 route 11 -> 128 x 128 x 128
14 conv 64 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 64 0.268 BF
15 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
16 conv 64 3 x 3/ 1 128 x 128 x 64 -> 128 x 128 x 64 1.208 BF
17 Shortcut Layer: 14, wt = 0, wn = 0, outputs: 128 x 128 x 64 0.001 BF
18 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
19 conv 64 3 x 3/ 1 128 x 128 x 64 -> 128 x 128 x 64 1.208 BF
20 Shortcut Layer: 17, wt = 0, wn = 0, outputs: 128 x 128 x 64 0.001 BF
21 conv 64 1 x 1/ 1 128 x 128 x 64 -> 128 x 128 x 64 0.134 BF
22 route 21 12 -> 128 x 128 x 128
23 conv 128 1 x 1/ 1 128 x 128 x 128 -> 128 x 128 x 128 0.537 BF
24 conv 256 3 x 3/ 2 128 x 128 x 128 -> 64 x 64 x 256 2.416 BF
25 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
26 route 24 -> 64 x 64 x 256
27 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
28 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
29 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
30 Shortcut Layer: 27, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
31 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
32 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
33 Shortcut Layer: 30, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
34 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
35 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
36 Shortcut Layer: 33, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
37 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
38 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
39 Shortcut Layer: 36, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
40 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
41 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
42 Shortcut Layer: 39, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
43 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
44 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
45 Shortcut Layer: 42, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
46 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
47 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
48 Shortcut Layer: 45, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
49 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
50 conv 128 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 128 1.208 BF
51 Shortcut Layer: 48, wt = 0, wn = 0, outputs: 64 x 64 x 128 0.001 BF
52 conv 128 1 x 1/ 1 64 x 64 x 128 -> 64 x 64 x 128 0.134 BF
53 route 52 25 -> 64 x 64 x 256
54 conv 256 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 256 0.537 BF
55 conv 512 3 x 3/ 2 64 x 64 x 256 -> 32 x 32 x 512 2.416 BF
56 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
57 route 55 -> 32 x 32 x 512
58 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
59 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
60 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
61 Shortcut Layer: 58, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
62 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
63 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
64 Shortcut Layer: 61, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
65 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
66 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
67 Shortcut Layer: 64, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
68 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
69 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
70 Shortcut Layer: 67, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
71 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
72 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
73 Shortcut Layer: 70, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
74 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
75 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
76 Shortcut Layer: 73, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
77 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
78 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
79 Shortcut Layer: 76, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
80 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
81 conv 256 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 256 1.208 BF
82 Shortcut Layer: 79, wt = 0, wn = 0, outputs: 32 x 32 x 256 0.000 BF
83 conv 256 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 256 0.134 BF
84 route 83 56 -> 32 x 32 x 512
85 conv 512 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 512 0.537 BF
86 conv 1024 3 x 3/ 2 32 x 32 x 512 -> 16 x 16 x1024 2.416 BF
87 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
88 route 86 -> 16 x 16 x1024
89 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
90 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
91 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
92 Shortcut Layer: 89, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
93 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
94 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
95 Shortcut Layer: 92, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
96 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
97 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
98 Shortcut Layer: 95, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
99 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
100 conv 512 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x 512 1.208 BF
101 Shortcut Layer: 98, wt = 0, wn = 0, outputs: 16 x 16 x 512 0.000 BF
102 conv 512 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.134 BF
103 route 102 87 -> 16 x 16 x1024
104 conv 1024 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x1024 0.537 BF
105 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
106 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
107 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
108 max 5x 5/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.003 BF
109 route 107 -> 16 x 16 x 512
110 max 9x 9/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.011 BF
111 route 107 -> 16 x 16 x 512
112 max 13x13/ 1 16 x 16 x 512 -> 16 x 16 x 512 0.022 BF
113 route 112 110 108 107 -> 16 x 16 x2048
114 conv 512 1 x 1/ 1 16 x 16 x2048 -> 16 x 16 x 512 0.537 BF
115 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
116 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
117 conv 256 1 x 1/ 1 16 x 16 x 512 -> 16 x 16 x 256 0.067 BF
118 upsample 2x 16 x 16 x 256 -> 32 x 32 x 256
119 route 85 -> 32 x 32 x 512
120 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
121 route 120 118 -> 32 x 32 x 512
122 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
123 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
124 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
125 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
126 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
127 conv 128 1 x 1/ 1 32 x 32 x 256 -> 32 x 32 x 128 0.067 BF
128 upsample 2x 32 x 32 x 128 -> 64 x 64 x 128
129 route 54 -> 64 x 64 x 256
130 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
131 route 130 128 -> 64 x 64 x 256
132 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
133 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
134 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
135 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
136 conv 128 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 128 0.268 BF
137 conv 256 3 x 3/ 1 64 x 64 x 128 -> 64 x 64 x 256 2.416 BF
138 conv 255 1 x 1/ 1 64 x 64 x 256 -> 64 x 64 x 255 0.535 BF
139 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.20
nms_kind: greedynms (1), beta = 0.600000
140 route 136 -> 64 x 64 x 128
141 conv 256 3 x 3/ 2 64 x 64 x 128 -> 32 x 32 x 256 0.604 BF
142 route 141 126 -> 32 x 32 x 512
143 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
144 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
145 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
146 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
147 conv 256 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 256 0.268 BF
148 conv 512 3 x 3/ 1 32 x 32 x 256 -> 32 x 32 x 512 2.416 BF
149 conv 255 1 x 1/ 1 32 x 32 x 512 -> 32 x 32 x 255 0.267 BF
150 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000
151 route 147 -> 32 x 32 x 256
152 conv 512 3 x 3/ 2 32 x 32 x 256 -> 16 x 16 x 512 0.604 BF
153 route 152 116 -> 16 x 16 x1024
154 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
155 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
156 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
157 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
158 conv 512 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 512 0.268 BF
159 conv 1024 3 x 3/ 1 16 x 16 x 512 -> 16 x 16 x1024 2.416 BF
160 conv 255 1 x 1/ 1 16 x 16 x1024 -> 16 x 16 x 255 0.134 BF
161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 91.095
avg_outputs = 757643
Allocate additional workspace_size = 9.44 MB
Try to load weights: K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights
Loading weights from K:\Blender3D\OmniTrax\Networks\YOLOv4-COCO-20230501T235200Z-001\yolov4.weights...
seen 64, trained: 0 K-images (0 Kilo-batches_64)
Done! Loaded 162 layers from weights-file
Couldn't open file: C:/Users/PlumStation/Desktop/OmniTrax_Testing/YOLOv4-COCO/coco.names
Error: Not freed memory blocks: 56843, total unfreed memory 21.423847 MB
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.
Freeing memory after the leak detector has run. This can happen when using static variables in C++ that are defined outside of functions. To fix this error, use the 'construct on first use' idiom.

Broken unit test imports

Hi Fabi,

The codebase is looking great! I think it's gonna be very useful for a lot of people.

I just had a quick go at running the tests locally in a linux machine (thanks for adding the steps! 🙌 ), and I got an error that looks like a refactoring side effect (omni_trax.utils not found). I didn't dig further but I figured I'd let you know just in case.

Originally posted by @sfmig in #26 (comment)

[JOSS] Ubuntu installation error

I followed the instructions in the guide for installation on Ubuntu, but unfortunately got an error during the test drive.

When hitting TRACK I got:

OSError: /home/sminano/.config/blender/2.92/scripts/addons/omni_trax/darknet/libdarknet.so: cannot open shared object file: No such file or directory

Using Ubuntu 20.04 (and Blender 2.92.0).

Let me know if any more info is required!

[JOSS] Multi animal pose estimation tutorial

Hi,

I followed the multi animal pose estimation tutorial, very fun! But I got a bit confused about the following:
I set Pose (input) frame size (px) to match the constant detection sizes in the detector panel (both 400 px), but still one of the pose estimation crops (track_5) showed a very zoomed-in ant. Am I misunderstanding the constant detector size parameter? Is this expected? See screenshot below.

image

Below are some additional suggestions for the tutorial:

  • Would it be possible to suggest a constant detection size for the sample data provided? I used 400 px which seemed to cover all animals in full.

  • Maybe it could be nice to add some tips on how to transform the bodyparts' coordinates from the cropped video space to the full video space using the exported data - I would expect most users would end up doing this. If this is an existing script maybe it can be linked here (a rough sketch follows after this list).

  • In the detector panel: are both "constant detection size" and "minimum detection size" only enabled if the constant detection sizes checkbox is selected? If so, could this be clarified? Maybe they could be greyed out if the checkbox is not true, or just a text clarification.
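
For reference, the crop-to-full-frame transform mentioned in the second point is essentially a translation. A rough sketch, assuming a square crop of size crop_size centred on the track position (cx, cy) in full-frame pixels and pose coordinates (px, py) given in crop pixel space (the helper name is hypothetical, not an OmniTrax function):

def crop_to_fullframe(px, py, cx, cy, crop_size):
    # Shift crop-space coordinates by the crop's top-left corner to obtain
    # full-frame pixel coordinates.
    half = crop_size / 2.0
    return cx - half + px, cy - half + py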

Thanks!

[JOSS] Animal research statement?

According to JOSS policy:

In the exceptional case a JOSS submission contains original data on animal research, the corresponding author must confirm that the data was collected in accordance with the latest guidelines and applicable regulations

Does this need to be added? 🤔

I am not sure if the sample data shared in the repo is "original" (meaning, not published elsewhere), or if this applies to invertebrates, but it may just be a matter of adding a sentence to the readme... I saw that in the replicAnt paper there was no need for such a statement, so I am not sure it applies.

[V0.2.2] "Start tracking" crashes on CPU and GPU

I installed the add-on as administrator, as instructed, and used absolute paths for the pretrained YOLO network.
The add-on shows a ticked checkbox, which the instructions say indicates a successful install.
Any attempt to track a video in the motion tracking workspace results in a crash.
The CPU crash mentions it can't find some darknet CPU DLL, and the GPU run crashes to desktop.
Sorry for not being specific about the DLL; I uninstalled Blender for now and don't have that log.
Booting Blender in debug mode shows me no related errors.

[JOSS Review] Software Paper: State of the field

Hi @FabianPlum! I've read through the paper, but I noticed there wasn't much discussion on this topic. Could you possibly include a bit more discussion on this particular point?

State of the field: Do the authors describe how this software compares to other commonly-used packages?

Thank you!

[JOSS] Automated tests

I wasn't able to run the tests locally on my machine - would it be possible to include steps for this?

A good place for this could be in the CONTRIBUTING.md file, since after implementing a feature one would want to run the tests to check nothing broke.

It would be nice to also add other dependencies required for developing, maybe with a requirements-dev.txt file (to run pip install -r requirements-dev.txt in the desired environment).

[JOSS] Missing contributing guidelines

Hey there! One of your JOSS reviewers here. While the readme file is solid, the package seems to be missing clear contribution guidelines for people who would like to send pull requests.

I would strongly recommend adding one (you can get a pretty good idea on what to include here).

Best!
Lucas

[V0.2.2] Creating a mask should be optional, but it isn't.

At step 5 ("Masking regions of interest (OPTIONAL)") at:
https://github.com/FabianPlum/OmniTrax/blob/main/docs/tutorial-tracking.md

it appears that having a mask is not optional but a requirement in 0.2.2, because the tracking process won't start without one. When clicking the track button, the Blender console will simply say:

Running inference on:  [LogicalDevice(name='/device:CPU:0', device_type='CPU'), LogicalDevice(name='/device:GPU:0', device_type='GPU')]

INFO: Initialised darknet network found!

'bpy_prop_collection[key]: key "Mask" not found'
No mask found!

And then it stops.

[JOSS] Feedback on installation instructions

Thanks for the detailed instructions, especially with the tricky CUDA bits!

I really liked that enabling the addon nicely installs all the dependencies, and the screenshots and detailed steps will definitely make the tool accessible to a wider range of users.

I have some suggestions but none are vital, so feel free to take/leave them as you see fit.

  • #13

  • #12

    • here we did it with a bash and a Python script, but there may be better ways.
  • #14

    • Since it is a bit lengthy, maybe it's a good idea to have a quick-guide in the README. This would just have the main steps (Blender, cuda, omnitrax addon, test run) and would link to a longer version with further details inside the docs.
    • Seeing the updates before the installation feels a bit confusing, since the first time you read it you lack sufficient context.
    • Maybe also adding separate sections for platform support (maybe as a matrix also showing GPU support?) and dependencies could make it all a bit clearer at a glance.
  • #15

    • I would suggest including a pointer to the Blender docs on how to organise areas in a workspace.
    • It would also be nice to add a tip on how to define a workspace that persists. If people are gonna use this frequently for different projects it seems finicky to set up the workspace every time you launch Blender. I think workspaces are actually linked to a .blend file, so you'd need to define them as part of a Startup file (see here)
    • You can use markdown alerts for tips, they render nicely in Github.
  • #16

Hope this helps!

[JOSS] Data sharing - broken links

This issue is part of the JOSS review for this repo.

The repo provides a lot of very useful data, like sample footage and trained models, but I found that some of the links to the data were broken (specifically in docs/example_footage.md and docs/tutorial-tracking.md).

I used this pre-commit hook as a quick way to check all links in markdown files; below is the summary output:
image

I leave you the full output below for reference (it may be useful to pin down the specific links).

Full output:
Markdown Link Check......................................................Failed
- hook id: markdown-link-check
- exit code: 1

(node:47806) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
(Use node --trace-deprecation ... to show where the warning was created)

FILE: docs/CUDA_installation_guide.md
[✓] https://en.wikipedia.org/wiki/CUDA
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
[✓] https://developer.nvidia.com/rdp/cudnn-archive
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] CUDA_installation_images/CUDA_01.PNG
[✓] CUDA_installation_images/CUDA_02.PNG
[✓] CUDA_installation_images/CUDA_03.PNG
[✓] CUDA_installation_images/CUDA_04.PNG
[✓] CUDA_installation_images/CUDA_05.PNG
[✓] CUDA_installation_images/CUDA_06.PNG
[✓] CUDA_installation_images/CUDA_07.PNG
[✓] CUDA_installation_images/CUDA_08.PNG
[✓] CUDA_installation_images/CUDA_09.PNG
[✓] CUDA_installation_images/CUDA_10.PNG
[✓] CUDA_installation_images/CUDA_11.PNG
[✓] CUDA_installation_images/CUDA_12.PNG
[✓] CUDA_installation_images/CUDA_13.PNG
[✓] CUDA_installation_images/CUDA_14.PNG

21 links checked.

FILE: .github/CONTRIBUTING.md
[✓] #introduction
[✓] #getting-started
[✓] #contributing
[✓] #reporting-bugs
[✓] #suggesting-enhancements
[✓] #code-contributions
[✓] #contact
[✓] mailto:[email protected]
[✓] https://github.com/FabianPlum/OmniTrax/tree/main/docs
[✓] https://github.com/FabianPlum/OmniTrax/issues
[✓] https://github.com/FabianPlum/OmniTrax/issues/new/choose
[✖] https://github.com/omnitrax/omnitrax

12 links checked.

ERROR: 1 dead links found!
[✖] https://github.com/omnitrax/omnitrax → Status: 404

FILE: docs/example_footage.md
[✓] ../README.md
[✓] https://drive.google.com/file/d/1I0vla-CyTYpNIKNRJIzegxJ44WGyQ291/view?usp=share_link
[✓] https://drive.google.com/file/d/1f417gbG7nt3xMIfKZgmr-3gPEUm_DPoJ/view?usp=share_link
[✓] https://drive.google.com/file/d/1pa4hD-64JroByLavQZCvigMs7RGVxyvs/view?usp=share_link
[✓] https://drive.google.com/file/d/1n-SRw7hswtMpaaXoGLuu_i1SFPgCBKoh/view?usp=share_link
[✓] https://drive.google.com/file/d/1esvN2C4Egto_kZFWg5qGsETVphaa3aSi/view?usp=share_link
[✓] https://drive.google.com/file/d/1rn4WUGyh8gotdC_UuVHIhqlqVn6sOhob/view?usp=share_link
[✖] https://drive.google.com/file/d/10d2YuEpx62UOU8oQ1179XVxuKZOCbUZ5/view?usp=share_link
[✓] https://drive.google.com/file/d/1X5fNkaEkALo1lgAu4HsKgzyZSAEapIq_/view?usp=share_link
[✓] https://drive.google.com/file/d/109u6MyJFlLaiHaf08OavPWS8I6KvmPee/view?usp=share_link
[✓] https://drive.google.com/file/d/1izoE7bLScQODYloV5B6bwzWtJ4jcqp1K/view?usp=sharing
[✓] https://drive.google.com/file/d/1XzZmgkBUKeA3Q1YeYGMbwQoqsYtYgcjF/view?usp=sharing
[✓] https://drive.google.com/drive/folders/14wBXFhV1KI4nD_TZXTrZssXqdOsWuDwk?usp=sharing
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/preview_tracking.gif
[✓] ../images/multi_ants_online_tracking_&_pose_estimation.gif

17 links checked.

ERROR: 1 dead links found!
[✖] https://drive.google.com/file/d/10d2YuEpx62UOU8oQ1179XVxuKZOCbUZ5/view?usp=share_link → Status: 404

FILE: .github/ISSUE_TEMPLATE/bug_report.md
No hyperlinks found!

0 links checked.
(node:47807) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
(Use node --trace-deprecation ... to show where the warning was created)

FILE: docs/tutorial-tracking.md
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] trained_networks.md
[✓] example_footage.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] ../README.md
[✓] ../images/example_ant_recording.mp4
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://www.blender.org/download/lts/3-3/
[✓] CUDA_installation_guide.md
[✓] https://github.com/FabianPlum/FARTS
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.1
[✓] tutorial-pose-estimation.md
[✓] https://en.wikipedia.org/wiki/Kalman_filter
[✓] https://en.wikipedia.org/wiki/Hungarian_algorithm
[✓] https://github.com/FabianPlum/blenderMotionExport
[✓] https://github.com/Amudtogal
[✓] ../example_scripts/Tracking_Dataset_Processing.ipynb
[✖] ..images/example_ant_recording.mp4
[✓] ../example_scripts/example_ant_recording
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/use_01.jpg
[✓] ../images/use_02.jpg
[✓] ../images/use_03.jpg
[✓] ../images/masking_01.png
[✓] ../images/masking_02.png
[✓] ../images/masking_03.png
[✓] ../images/use_04.gif
[✓] ../images/example_ant_tracked.gif
[✖] ../example_scripts/_heatmap_of_ground_truth_tracks.svg
[✓] ../images/ase_01.jpg
[✓] ../images/ase_new_02.jpg

35 links checked.

ERROR: 2 dead links found!
[✖] ..images/example_ant_recording.mp4 → Status: 400
[✖] ../example_scripts/_heatmap_of_ground_truth_tracks.svg → Status: 400

FILE: docs/trained_networks.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://drive.google.com/drive/folders/11QXseJwISdodSnXJV6fwM97XfT2aXx2y?usp=sharing
[✓] https://drive.google.com/drive/folders/1wQcfLlDUvnWthyzbvyVy9oqyTZ2F-JFo?usp=sharing
[✓] https://drive.google.com/drive/folders/1U9jzOpjCcu6wDfTEH3uQqGKPxW_QzHGz?usp=sharing
[✓] https://drive.google.com/drive/folders/1eXAowtyBsqGEjvmQE1YlSeHJ6AGBwpUs?usp=share_link
[✓] https://github.com/AlexeyAB/darknet/wiki/YOLOv4-model-zoo
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] https://drive.google.com/drive/folders/1or1TF3tvi1iIzldEAia3G2RNKY5J7Qz4?usp=sharing
[✓] https://drive.google.com/drive/folders/1FY3lAkAisOG_RIUBuaynz1OjBkzjH5LL?usp=sharing
[✓] https://drive.google.com/file/d/1IH9R9PgJMYteigsrMi-bZnz4IMcydtWU/view?usp=sharing
[✓] https://drive.google.com/drive/folders/1-DHkegHiTkWbO7YboXxDC5tU4Aa71-9z?usp=share_link
[✓] https://drive.google.com/drive/folders/1BLulUYkwww7SfzXgSSVM71GLI4dQysP5?usp=share_link
[✓] https://arxiv.org/abs/1605.03170
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] tutorial-pose-estimation.md
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only

19 links checked.

FILE: .github/ISSUE_TEMPLATE/feature_request.md
No hyperlinks found!

0 links checked.

FILE: README.md
[✓] https://github.com/FabianPlum/OmniTrax/releases
[✓] https://github.com/FabianPlum/OmniTrax
[✓] https://www.python.org/
[✓] https://app.travis-ci.com/github/FabianPlum/OmniTrax
[✓] https://github.com/FabianPlum/FARTS
[✓] https://youtu.be/YXxM4QRaCDU
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.3.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.3.0
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.3
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.2
[✓] https://github.com/FabianPlum/OmniTrax/blob/main/docs/tutorial-tracking.md
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.2.0
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.3
[✓] https://www.blender.org/download/lts/3-3/
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.2
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.1
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.0.2
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://github.com/FabianPlum/OmniTrax/releases/tag/V_0.0.1
[✓] https://download.blender.org/release/Blender2.92/
[✓] https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
[✓] https://developer.nvidia.com/rdp/cudnn-archive
[✓] https://www.tensorflow.org/install/source#gpu
[✓] https://www.blender.org/download/release/Blender3.3/blender-3.3.1-windows-x64.msi/
[✓] docs/CUDA_installation_guide.md
[✓] https://github.com/FabianPlum/OmniTrax/releases/download/V_0.2.3/omni_trax.zip
[✓] docs/tutorial-tracking.md
[✓] docs/tutorial-pose-estimation.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] images/example_ant_recording.mp4
[✓] docs/trained_networks.md
[✓] docs/example_footage.md
[✓] https://choosealicense.com/licenses/mit/
[✓] https://img.shields.io/github/tag/FabianPlum/OmniTrax.svg?label=version&style=flat
[✓] https://img.shields.io/github/license/FabianPlum/OmniTrax.svg?style=flat
[✓] https://img.shields.io/badge/Made%20with-Python-1f425f.svg
[✓] https://app.travis-ci.com/FabianPlum/OmniTrax.svg?branch=main
[✓] images/omnitrax_logo.svg#gh-dark-mode-only
[✓] images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] images/preview_tracking.gif
[✓] images/single_ant_1080p_POSE_track_0.gif
[✓] images/single_ant_1080p_POSE_track_0_skeleton.gif
[✓] images/omnitrax_demo_screen_updated.jpg
[✓] images/install_01.jpg
[✓] images/install_02.jpg
[✓] images/install_03.jpg
[✓] images/install_04.jpg
[✓] images/install_05.jpg
[✓] images/install_06.jpg
[✓] images/use_01.jpg
[✓] images/use_02.jpg
[✓] images/use_03.jpg
[✓] images/use_04.gif

56 links checked.
(node:47808) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
(Use node --trace-deprecation ... to show where the warning was created)

FILE: docs/tutorial-pose-estimation.md
[✓] https://github.com/DeepLabCut/DeepLabCut-live
[✓] https://www.mackenziemathislab.org/dlc-modelzoo
[✓] trained_networks.md
[✓] example_footage.md
[✓] https://github.com/AlexeyAB/darknet
[✓] https://github.com/DeepLabCut/DeepLabCut
[✓] ../README.md
[✓] https://github.com/FabianPlum/FARTS
[✓] https://drive.google.com/file/d/156t8r3ZHrkzC72jZapFl9OBFPqNIvIXg/view?usp=share_link
[✓] https://drive.google.com/drive/folders/1-DHkegHiTkWbO7YboXxDC5tU4Aa71-9z?usp=share_link
[✓] https://drive.google.com/file/d/1izoE7bLScQODYloV5B6bwzWtJ4jcqp1K/view?usp=sharing
[✓] https://drive.google.com/drive/folders/1PSseMeClcYIe9dcYG-JaOD2CzYceiWdl?usp=sharing
[✓] https://drive.google.com/drive/folders/1FY3lAkAisOG_RIUBuaynz1OjBkzjH5LL?usp=sharing
[✓] tutorial-tracking.md
[✓] https://choosealicense.com/licenses/mit/
[✓] ../images/omnitrax_logo.svg#gh-dark-mode-only
[✓] ../images/omnitrax_logo_light.svg#gh-light-mode-only
[✓] ../images/single_ant_1080p_POSE_track_0.gif
[✓] ../images/single_ant_1080p_POSE_track_0_skeleton.gif
[✓] ../images/VID_20220201_160304_50%25_POSE_fullframe.gif
[✓] ../images/multi_ants_online_tracking_&_pose_estimation.gif
[✓] ../images/Human_Tracking.gif
[✓] ../images/Human_POSE_fullframe.gif

23 links checked.

FILE: paper/paper.md
[✓] ../images/omnitrax_demo_screen.jpg

1 links checked.

[JOSS] Functionality documentation: More verbose docstrings needed

Hi @FabianPlum ! 'm nearly finished with the checks, but I've noticed the codebase could greatly benefit from more descriptive docstrings and some reformatting. To enhance consistency and readability, I recommend choosing a single docstring style—either Numpy, Google, or PEP257—that you feel best suits your project.

For implementing these improvements, ruff can be a good tool.
You can configure ruff to enforce your chosen docstring style and other linting rules by creating a pyproject.toml file with the necessary settings.
When you run ruff check ., it will automatically apply these configurations.

For more detailed instructions on setting up and using ruff, please see: https://pypi.org/project/ruff/0.0.221/.

[JOSS Review] Documentation: Statement of need

Hi there! I believe the documentation could benefit from additional details regarding this topic, as mentioned in your software paper.

A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?

Thank you!

Multi-animal Pose Estimation Plot Skeleton is not working properly

Hi @FabianPlum! 😄

I tested MA pose estimation on OmniTrax using the test data you provided:
OS: Ubuntu 20.04
Video : multiple_ants_1920x1080_01.mp4
YOLO Network : atta_single_class/yolov4-big_and_small_ants_320.cfg
DLC Network : DLC_ANT-POSE-MIXED_resnet_101_iteration-0_shuffle-1
With these parameters:
Screenshot from 2024-03-03 20-34-40

and got this:
Screenshot from 2024-03-03 20-27-00

Was wondering if this is also what you are getting?

Thank you 😄

[JOSS] OS compatibility

Lucas, from your JOSS submission, again.

The package so far looks awesome, to be honest, but it took me a while to get my hands on a proper machine to install it. What are your thoughts / what would the main issues be with making at least the CPU version compatible with other operating systems (ie MacOS, Linux)?

Best!

[JOSS] Single animal pose estimation tutorial

Hi,

I had a go at the pose estimation full-frame tutorial and am getting this error when selecting to export the data:

Using C:\Users\sminano\Downloads\VID_20220201_160304.mp4 for pose estimation...
Traceback (most recent call last):
  File "C:\Users\sminano\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\omni_trax\__init__.py", line 783, in execute
    pose_output_file.write(pose_joint_header_l1 + "\n")
UnboundLocalError: local variable 'pose_joint_header_l1' referenced before assignment
Error: Python: Traceback (most recent call last):
  File "C:\Users\sminano\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\omni_trax\__init__.py", line 783, in execute
    pose_output_file.write(pose_joint_header_l1 + "\n")
UnboundLocalError: local variable 'pose_joint_header_l1' referenced before assignment

These are my pose estimation parameters:
image

I used the sample video provided VID_20220201_160304.mp4

I also noticed the Blender file explorer that pops up when filling in the DLC model guides you to select a file, but actually we need to select a folder. Maybe a clarification could be added to the tutorial, something like 'double-click Accept to select the parent folder'?

Code cleanup

May I ask if you could also run your codebase through ruff or flake8? They would give more tips on how to improve the entire codebase. :)

pip install ruff
ruff check .
or
pip install flake8
flake8 .

You can also use ruff to fix any formatting errors.
ruff check . --fix # Lint all files in the current directory, and fix any fixable errors.

Originally posted by @rizarae-p in #26 (comment)

[JOSS] Contributing guidelines

Hi,

Below are some suggestions on the contributing guidelines - IMO only the first one would be a required change.

  • I found it odd that the CONTRIBUTING.md file is under the .github directory; I would suggest moving it to the root directory of the project.
  • In the paper you mention that you encourage the community to contribute trained models and datasets - maybe it would be good to mention that here too, and provide more specific details? For example, what kind of files to share and where (some open data repository?). Maybe it would be nice to have a list here of shared models and datasets available to use in OmniTrax.
  • It could be nice to include links to resources in the Getting started section (e.g., a getting started with Blender tutorial).

Thanks!
