
dlc2nwb's Introduction

Welcome! 👋

DeepLabCut™️ is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. Read a short development and application summary below.

Please see the documentation for all the information you need to get started! Please note that we currently support only Python 3.10+ (see the conda files for guidance).

Developers Stable Release:

  • Very quick start: you need TensorFlow installed (up to v2.10 is supported across platforms). Run pip install "deeplabcut[gui,tf]", which includes all functions plus the GUIs, or pip install "deeplabcut[tf]" for the headless version with TensorFlow. A quick way to verify the install is shown below.
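A minimal sanity check, not from the docs; it simply confirms the package imports and reports whichever version you installed:

import deeplabcut  # with the [tf] extra this also pulls in TensorFlow

print(deeplabcut.__version__)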

Developers Alpha Release:

We recommend using our conda file, see here or the new deeplabcut-docker package.

Our docs walk you through using DeepLabCut and key API points. For an overview of the toolbox and the project-management workflow, see our step-by-step Nature Protocols paper.

For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org

🐭 pose tracking of single animals demo Open in Colab

🐭🐭🐭 pose tracking of multiple animals demo Open in Colab

  • See more demos here. We provide data and several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker, and on Google Colab.

Why use DeepLabCut?

In 2018, we demonstrated the capabilities for trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox applicable only to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to rats, humans, various fish species, bacteria, leeches, various robots, cheetahs, mouse whiskers, and racehorses.

DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name of our toolbox (see references below). Since then, the package has changed substantially. The code has been re-tooled and refactored since 2.1+: we have added faster and higher-performance variants with MobileNetV2, EfficientNet, and our own DLCRNet backbones (see Pretraining boosts out-of-domain robustness for pose estimation and Lauer et al 2022). Additionally, we have improved inference speed, provided additional and novel augmentation methods, and added real-time and multi-animal support.

In v3.0+ we changed the backend to support PyTorch. This brings not only an easier installation process for users, but also performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜! We currently provide state-of-the-art performance for animal pose estimation, and the labs (M. Mathis Lab and A. Mathis Group) have both top journal and computer vision conference papers.

Left: Due to transfer learning it requires little training data for multiple, challenging behaviors (see Mathis et al. 2018 for details). Mid Left: The feature detectors are robust to video compression (see Mathis/Warren for details). Mid Right: It allows 3D pose estimation with a single network and camera (see Mathis/Warren). Right: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see Nath* and Mathis* et al. 2019).

DeepLabCut is embedded in a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See DLC-Utils for some helper code.

Code contributors:

DLC code was originally developed by Alexander Mathis & Mackenzie Mathis, and was extended in 2.0 with the core dev team consisting of Tanmay Nath (2.0-2.1), and currently (2.1+) with Jessy Lauer and (2.3+) Niels Poulsen. DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the 100+ contributors. Please see AUTHORS for more details!

This is an actively developed package and we welcome community development and involvement.

Get Assistance & be part of the DLC Community✨:

| 🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad |
| --- | --- | --- | --- |
| Image.sc forum (🐭 tag: DeepLabCut) | To ask help and support questions 👋 | Promptly 🔥 | DLC Team and the DLC Community |
| GitHub DeepLabCut/Issues | To report bugs and code issues 🐛 (we encourage you to search issues first) | 2-3 days | DLC Team |
| Gitter | To discuss with other users, share ideas, and collaborate 💡 | 2 days | The DLC Community |
| GitHub DeepLabCut/Contributing | To contribute your expertise and experience 🙏💯 | Promptly 🔥 | DLC Team |
| 🚧 GitHub DeepLabCut/Roadmap | To learn more about our journey ✈️ | N/A | N/A |
| Twitter | To keep up with our latest news and updates 📢 | Daily | DLC Team |
| The DeepLabCut AI Residency Program | To come and work with us next summer 👏 | Annually | DLC Team |

References:

If you use this code or data, we kindly ask that you cite Mathis et al., 2018 and, if you use the Python package (DeepLabCut2.x), please also cite Nath, Mathis et al., 2019. If you utilize the MobileNetV2 or EfficientNet backbones, please cite Mathis, Biasi et al. 2021. If you use versions 2.2beta+ or 2.2rc1+, please cite Lauer et al. 2022.

DOIs (#ProTip, for helping you find citations for software, check out CiteAs.org!):

Please check out the following references for more details:

@article{Mathisetal2018,
    title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
    author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe  and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
    journal = {Nature Neuroscience},
    year = {2018},
    url = {https://www.nature.com/articles/s41593-018-0209-y}}

@article{NathMathisetal2019,
    title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
    author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
    journal = {Nature Protocols},
    year = {2019},
    url = {https://doi.org/10.1038/s41596-019-0176-0}}
    
@InProceedings{Mathis_2021_WACV,
    author    = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
    title     = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1859-1868}}
    
@article{Lauer2022MultianimalPE,
    title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
    author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
    journal={Nature Methods},
    year={2022},
    volume={19},
    pages={496 - 504}}

@article{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170}}

Review & Educational articles:

@article{Mathis2020DeepLT,
    title={Deep learning tools for the measurement of animal behavior in neuroscience},
    author={Mackenzie W. Mathis and Alexander Mathis},
    journal={Current Opinion in Neurobiology},
    year={2020},
    volume={60},
    pages={1-11}}

@article{Mathis2020Primer,
    title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
    author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
    journal={Neuron},
    year={2020},
    volume={108},
    pages={44-65}}

Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
    author = {Mathis, Alexander and Warren, Richard A.},
    title = {On the inference speed and video-compression robustness of DeepLabCut},
    year = {2018},
    doi = {10.1101/457242},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
    eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
    journal = {bioRxiv}}

License:

This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us! Note, artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission.

SuperAnimal models are provided for research use only (non-commercial use).

Major Versions:

  • For all versions, please see here.

VERSION 3.0: A whole new experience with PyTorch🔥. While the high-level API remains the same, the backend and developer friendliness have greatly improved, along with performance gains!

VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience.

VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects).

VERSION 2.0-2.1: This is the Python package of DeepLabCut that was originally released in Oct 2018 with our Nature Protocols paper (preprint here). This package includes graphical user interfaces to label your data and take you from data set creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases.

VERSION 1.0: The initial, Nature Neuroscience version of DeepLabCut can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11

News (and in the news):

💜 We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch

💜 The DeepLabCut Model Zoo launches SuperAnimals, see more here.

💜 DeepLabCut supports multi-animal pose estimation! maDLC is out of beta/rc mode and beta is deprecated; thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. For what's new and how to install it, please see the 2.2+ releases, our new paper, Lauer et al 2022, and the new docs on how to use it!

💜 We support multi-animal re-identification, see Lauer et al 2022.

💜 We have a real-time package available! http://DLClive.deeplabcut.org

dlc2nwb's People

Contributors

alexemg, bendichter, cbroz1, codycbakerphd, h-mayorquin, jeylau, mmathislab, saksham20


dlc2nwb's Issues

including skeleton

ndx-pose has a place for "edges", which I think maps to "skeleton" in DLC. However, the example config.yml does not contain skeleton information and the converter does not handle skeleton info. Including this information would allow us to provide much better visualizations of DLC output in NWB Widgets.
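A minimal sketch of what such a converter hook could look like, assuming a standard DLC config.yaml with bodyparts and skeleton keys (skeleton_to_edges is a hypothetical helper, not part of dlc2nwb):

from ruamel.yaml import YAML

def skeleton_to_edges(config_path):
    # DLC stores the skeleton as pairs of bodypart names;
    # ndx-pose expects edges as pairs of node indices.
    with open(config_path) as f:
        cfg = YAML(typ="safe").load(f)
    bodyparts = cfg["bodyparts"]
    return [
        [bodyparts.index(a), bodyparts.index(b)]
        for a, b in cfg.get("skeleton", [])
    ]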

Having a new release to include append nwbfile mode

After you have merged #10, we were wondering if you could do a new release. That would allow us to tell users they can get this new feature with pip install dlc2nwb instead of telling them to install the dev branch from GitHub.

Let us know if there is anything we can do to help make this happen.

Request for new release

Hello all,

The latest release of deeplabcut triggered some minor breakage in downstream packages due to the loosening of the tensorflow pin in the minimal requirements: catalystneuro/neuroconv#268

After checking the latest state of the main branch here, though, it seems like this may have been anticipated as of a few months ago by making the deeplabcut import here safer: https://github.com/DeepLabCut/DLC2NWB/blob/main/dlc2nwb/utils.py#L16-L17

However, it seems like the last release (Jul 29) was a few months before that.

Thus I'd like to request a new release of dlc2nwb so I can pin to the version using this safer import and bypass the need for tensorflow altogether.

Let me know if I'm missing something~

Cheers and happy holidays!

Next steps

  • include unit-tests for testing round-trip conversion
  • update usage in docs
  • expand description of DLC data + NWB data

Later:

  • put an example export in DLC cookbook
  • set up CI
  • add DLC multi animal project example

`opencv-python[-headless]` not installed automatically

In a fresh venv, installing dlc2nwb does not automatically install opencv-python or opencv-python-headless. Running pip show opencv-python or pip show opencv-python-headless afterwards shows nothing.

This is the final output for pip install dlc2nwb inside a fresh venv:

Successfully installed attrs-23.1.0 dlc2nwb-0.3 h5py-3.9.0 hdmf-3.8.0 jsonschema-4.18.4 jsonschema-specifications-2023.7.1 ndx-pose-0.1.1 numpy-1.25.1 pandas-2.0.3 pynwb-2.4.0 python-dateutil-2.8.2 pytz-2023.3 referencing-0.30.0 rpds-py-0.9.2 ruamel-yaml-0.17.32 ruamel.yaml.clib-0.2.7 scipy-1.11.1 six-1.16.0 tzdata-2023.3

timestamps

This looks really good!

One issue I found is that the timestamps are just integer-valued frame indices, e.g. (1.0, 2.0, 3.0, ...). These should be the time in seconds with respect to the session start time. Does DLC track timing, or does it simply go frame by frame without needing to know the frame times? We may need an additional input argument to support this. We could input just a sampling rate, but I know that videos often have irregular sampling.

Another, more minor, thing: if all the timestamps vectors are the same, we can create links between them so the values only need to be stored once in the file, and the other TimeSeries objects can point to it. This is easy to do in PyNWB, but the syntax would be hard to guess. You do:

timeseries1 = TimeSeries(...)
timeseries2 = TimeSeries(..., timestamps=timeseries1)
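For example, a self-contained sketch of the linking pattern (the names and data here are made up):

import numpy as np
from pynwb import TimeSeries

frame_times = np.arange(300) / 30.0  # hypothetical 30 fps video, in seconds

timeseries1 = TimeSeries(name="snout", data=np.random.rand(300, 2),
                         unit="pixels", timestamps=frame_times)
# Passing the first series as `timestamps` stores the vector only once;
# timeseries2 simply points at timeseries1's timestamps in the file.
timeseries2 = TimeSeries(name="tailbase", data=np.random.rand(300, 2),
                         unit="pixels", timestamps=timeseries1)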

Enable stand-alone conversion without DeepLabCut installed

Recently we had an issue using this repository, as the dependency on deeplabcut crashed our mac workflow (we think this might be related to DeepLabCut/DeepLabCut#1430). While state-of-the-art deep learning libraries are a necessity for the powerful analysis that deeplabcut enables, they are also known to have brittle installation processes and to introduce hard dependency-management problems in the ecosystem. In that context, we think it might be useful to enable this repository to work as a stand-alone post-processing tool. That is, running a conversion pipeline in an environment that does not have deeplabcut as a dependency.

To illustrate this need more concretely, consider the following two scenarios:

  • A researcher runs the deeplabcut processing pipeline on a workstation but then does the data analysis or paper writing on another computer and wants to modify the write-to-nwb pipeline quickly.
  • A researcher gets deeplabcut data from a collaborator and wants to integrate it into a pipeline that includes other modalities, but does not have the environment where the initial analysis was carried out.

In the cases above, the results are already produced, and this library's role would just be transforming the data into nwb. Therefore, installing deeplabcut in those scenarios is unnecessary and, as discussed above, might be brittle.
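One minimal way to achieve this is a guarded import (a sketch only; the actual change may differ and the fallback value here is illustrative):

try:
    from deeplabcut import __version__ as deeplabcut_version
except ImportError:
    # DeepLabCut (and thus TensorFlow) is not installed; converting
    # already-produced results can still proceed without it.
    deeplabcut_version = "unknown"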

I am opening an accompanying PR that achieves this with minimal changes.

Add optional parameters to `PoseEstimation`

As described in https://github.com/catalystneuro/neuroconv/issues/915, I need to pass a name value to PoseEstimation to support multiple DLC pose estimations in the same nwb file.

I am currently fixing it by piping an optional dictionary of additional kwargs through the function arguments like this:

def _write_pes_to_nwbfile(
    nwbfile,
    animal,
    df_animal,
    scorer,
    video,  # Expects a tuple: first index is the string path, second is the image shape as "0, width, 0, height"
    paf_graph,
    timestamps,
    exclude_nans,
    **optional_kwargs,
):
    ...

    pe = PoseEstimation(
        pose_estimation_series=pose_estimation_series,
        description="2D keypoint coordinates estimated using DeepLabCut.",
        original_videos=[video[0]],
        # TODO: check whether this is a mandatory arg in ndx-pose (can skip if video is not found)
        dimensions=[list(map(int, video[1].split(",")))[1::2]],
        scorer=scorer,
        source_software="DeepLabCut",
        source_software_version=deeplabcut_version,
        nodes=[pes.name for pes in pose_estimation_series],
        edges=paf_graph if paf_graph else None,
        **optional_kwargs,
    )
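With that change, a caller could then do something like this (the name value is hypothetical):

_write_pes_to_nwbfile(
    nwbfile, animal, df_animal, scorer, video,
    paf_graph, timestamps, exclude_nans,
    name="PoseEstimationCamera1",  # forwarded on to PoseEstimation
)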

If you would be open to supporting this fix, I can submit a new PR or add this solution to #23, as you prefer!

Can't get movie timestamps due to TypeError: object of type 'cv2.VideoCapture' has no len()

It is impossible to retrieve movie timestamps because dlc2nwb.utils.get_movie_timestamps throws TypeError: object of type 'cv2.VideoCapture' has no len()

Steps to reproduce:

from dlc2nwb.utils import get_movie_timestamps
get_movie_timestamps('VID_20240117_165651.mp4')

Expected behaviour:

The timestamps are returned

Actual behaviour:

TypeError: object of type 'cv2.VideoCapture' has no len()

Environment info:

OS: Windows 10 x64
Conda version: 23.3.1
Python version: 3.9.0
opencv-python version: 4.7.0.72
dlc2nwb version: 0.3

Additional info:

This error might depend on the opencv-python version, in which case pinning DLC2NWB to whichever opencv-python version added the ability to take the len() of a cv2.VideoCapture (or the last one before it was removed, if it is an old feature) is the simplest solution.

Alternatively, the first return value of reader.read() is a boolean indicating whether a frame was successfully read, so this can be used to change the for loop to a while loop, like so:

success, _ = reader.read()
while success:
    timestamps.append(reader.get(cv2.CAP_PROP_POS_MSEC))
    success, _ = reader.read()

fixes the issue. However (again, possibly depending on your opencv-python version), you then run into an AttributeError on line 83, since a cv2.VideoCapture has no attribute fps. This can be fixed by replacing reader.fps on that line with reader.get(cv2.CAP_PROP_FPS).
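Putting both fixes together, a version-robust sketch of the whole function could look like this (illustrative only, not the actual dlc2nwb implementation):

import cv2
import numpy as np

def get_movie_timestamps(movie_file):
    reader = cv2.VideoCapture(movie_file)
    timestamps = []
    # Fix 1: iterate on the success flag instead of len(reader)
    success, _ = reader.read()
    while success:
        timestamps.append(reader.get(cv2.CAP_PROP_POS_MSEC))
        success, _ = reader.read()
    # Fix 2: query the frame rate via the property getter, not reader.fps
    fps = reader.get(cv2.CAP_PROP_FPS)
    reader.release()
    timestamps = np.array(timestamps) / 1000  # POS_MSEC is in milliseconds
    if fps > 0 and len(timestamps) > 1 and np.allclose(timestamps, 0):
        # Some containers report no POS_MSEC; fall back to a regular grid
        timestamps = np.arange(len(timestamps)) / fps
    return timestamps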
