
somuns-yjx / cameratraps

This project is forked from microsoft/cameratraps.


License: MIT License


cameratraps's Introduction

Announcement

At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and grow. We are expanding the CameraTraps repo to introduce PyTorch Wildlife, a Collaborative Deep Learning Framework for Conservation, where researchers can come together to share and use datasets and deep learning architectures for wildlife conservation.

We've been inspired by the potential and capabilities of MegaDetector, and we deeply value its contributions to the community. As we forge ahead with PyTorch Wildlife, please know that we remain committed to supporting and maintaining MegaDetector, ensuring its continued relevance and utility.

PyTorch Wildlife: A Collaborative Deep Learning Framework for Conservation

Version 0.0.0 is out!

You can access our current version of PyTorch Wildlife here!

Core Features

  • Unified Framework: PyTorch Wildlife integrates four pivotal elements:

    • Machine Learning Models
    • Pre-trained Weights
    • Datasets
    • Utilities
  • Our work: In the accompanying diagram, boxes outlined in red represent elements that will be added and remain fixed, while those outlined in blue will be part of our ongoing development.

  • Inaugural Model: We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from MegaDetector (see the usage sketch after this list).

  • Expandable Repository: As we move forward, our platform will welcome new models and pre-trained weights. We're excited to host contributions from global researchers through a dedicated submission platform.

  • Datasets from LILA: PyTorch Wildlife will also incorporate the vast datasets hosted on LILA, making it a treasure trove for conservation research.

  • Versatile Utilities: Our set of utilities spans from visualization tools to task-specific utilities, many inherited from the trusted Megadetector.

  • User Interface Flexibility: While we provide a foundational user interface, our platform is designed to inspire. We encourage researchers to craft and share their unique interfaces, and we'll list both existing and new UIs from other collaborators for the community's benefit.
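
As promised in the "Inaugural Model" item above, here is a minimal sketch of running the YOLO-based detector with MegaDetector weights through PyTorch Wildlife. The package layout and call signatures shown here (PytorchWildlife.models.detection, MegaDetectorV5, single_image_detection) are assumptions based on early releases and may not match the version you install; treat this as an illustration and defer to the PyTorch Wildlife documentation for the real API.

```python
# Minimal sketch (assumed API) -- verify names and signatures against the
# PyTorch Wildlife documentation for your installed version.
import numpy as np
from PIL import Image

# Assumed package layout for the PytorchWildlife package.
from PytorchWildlife.models import detection as pw_detection

# Load the YOLO-based detector initialized with MegaDetector weights.
detection_model = pw_detection.MegaDetectorV5(device="cpu", pretrained=True)

# Run detection on a single (hypothetical) camera trap image. Depending on the
# release, this call may instead expect a pre-transformed tensor or a file path.
img = np.array(Image.open("example_camera_trap_image.jpg").convert("RGB"))
result = detection_model.single_image_detection(img)

# 'result' is expected to contain bounding boxes, categories, and confidences.
print(result)
```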

Let's shape the future of wildlife research, together!

Below you can find a list of the core elements of PyTorchWildlife.

Development roadmap

Here you can find the milestone roadmap for PyTorch Wildlife for October!

MegaDetector Overview

This repo contains the tools for training, running, and evaluating detectors and classifiers for images collected from motion-triggered wildlife cameras. The core functionality provided is:

  • Training and running models, particularly MegaDetector, an object detection model that does a pretty good job finding animals, people, and vehicles (and therefore is pretty good at finding empty images) in a variety of terrestrial ecosystems
  • Data parsing from frequently-used camera trap metadata formats into a common format
  • A batch processing API that runs MegaDetector on large image collections, to accelerate population surveys (see the example invocation after this list)
  • A real-time API that runs MegaDetector (and some species classifiers) synchronously, primarily to support anti-poaching scenarios (e.g. see this blog post describing how this API supports Wildlife Protection Solutions)
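
As a concrete illustration of the batch-processing workflow mentioned above, here is a hedged sketch that invokes the repo's batch detection script from Python. The script path, model filename, and flags are assumptions that may differ between releases; the MegaDetector README documents the current invocation.

```python
# Hedged sketch: run MegaDetector over a folder of images via the batch script.
# Script path, model filename, and flags are assumptions -- check the
# MegaDetector README for the exact, current command line.
import subprocess

subprocess.run(
    [
        "python", "detection/run_detector_batch.py",
        "md_v5a.0.0.pt",                 # MegaDetector weights (assumed filename)
        "/path/to/camera_trap_images",   # folder of images to process
        "megadetector_results.json",     # detections are written here as JSON
        "--recursive",                   # walk subfolders (assumed flag)
        "--output_relative_filenames",   # store paths relative to the input folder
    ],
    check=True,
)
```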

This repo is maintained by folks at Ecologize who like looking at pictures of animals. We want to support conservation, of course, but we also really like looking at pictures of animals.

What's MegaDetector all about?

The main model that we train and run using tools in this repo is MegaDetector, an object detection model that identifies animals, people, and vehicles in camera trap images. This model is trained on several hundred thousand bounding boxes from a variety of ecosystems. Lots more information – including download links and instructions for running the model – is available on the MegaDetector page.

Here's a "teaser" image of what detector output looks like:

Red bounding box on fox

Image credit University of Washington.

How do I get started?

If you're just considering the use of AI in your workflow, and you aren't even sure yet whether MegaDetector would be useful to you, we recommend reading the "getting started with MegaDetector" page.

If you're already familiar with MegaDetector and you're ready to run it on your data (and you have some familiarity with running Python code), see the MegaDetector README for instructions on downloading and running MegaDetector.

Who is using MegaDetector?

We work with ecologists all over the world to help them spend less time annotating images and more time thinking about conservation. You can read a little more about how this works on our getting started with MegaDetector page.

Here are a few of the organizations that have used MegaDetector. We only list organizations that (a) we know about and (b) have given us permission to refer to them here (or have posted publicly about their use of MegaDetector), so if you're using MegaDetector or other tools from this repo and would like to be added to this list, email us!

Data

This repo does not directly host camera trap data, but we work with our collaborators to make data and annotations available whenever possible on lila.science.

Contact

For questions about this repo, contact [email protected].

Contents

This repo is organized into the following folders...

api

Code for hosting our models as an API, either for synchronous operation (i.e., for real-time inference) or as a batch process (for large biodiversity surveys). Common operations one might do after running MegaDetector – e.g. generating preview pages to summarize your results, separating images into different folders based on AI results, or converting results to a different format – also live in this folder, within the api/batch_processing/postprocessing folder.
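
To make the postprocessing step concrete, here is a hedged sketch that reads a MegaDetector batch-output .json file and copies each image into an "animal", "person", "vehicle", or "empty" folder. The field names follow the commonly documented batch output format (an "images" list with per-image "detections", each carrying a "category" and "conf"), but verify them against your own results file; the paths and the 0.2 confidence threshold are illustrative.

```python
# Hedged sketch: separate images into folders based on MegaDetector results.
# Field names reflect the documented batch output format; paths and the
# confidence threshold are illustrative placeholders.
import json
import shutil
from pathlib import Path

RESULTS_FILE = "megadetector_results.json"     # produced by the batch script
IMAGE_ROOT = Path("/path/to/camera_trap_images")
OUTPUT_ROOT = Path("/path/to/sorted_images")
CONF_THRESHOLD = 0.2                           # illustrative cutoff

with open(RESULTS_FILE) as f:
    results = json.load(f)

# Typically {"1": "animal", "2": "person", "3": "vehicle"} in MegaDetector output.
category_names = results.get("detection_categories", {})

for image in results["images"]:
    detections = [d for d in image.get("detections", [])
                  if d["conf"] >= CONF_THRESHOLD]
    if detections:
        best = max(detections, key=lambda d: d["conf"])
        label = category_names.get(best["category"], "unknown")
    else:
        label = "empty"
    src = IMAGE_ROOT / image["file"]
    dst = OUTPUT_ROOT / label / image["file"]
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
```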

classification

Experimental code for training species classifiers on new data sets, generally trained on MegaDetector crops. Currently the main pipeline described in this folder relies on a large database of labeled images that is not publicly available; therefore, this folder is not yet set up to facilitate training of your own classifiers. However, it is useful for users of the classifiers that we train, and contains some useful starting points if you are going to take a "DIY" approach to training classifiers on cropped images.
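
For readers taking the "DIY" route, here is a hedged sketch of the cropping step that feeds such a classifier: MegaDetector boxes are conventionally normalized [x_min, y_min, width, height] fractions of the image size, so each detection can be cut out and saved as a training crop. Field names, paths, and the confidence threshold are assumptions to adapt to your own data.

```python
# Hedged sketch: crop MegaDetector detections so the crops can be fed to a
# species classifier. Assumes normalized [x, y, width, height] boxes in the
# batch output JSON; paths and threshold are illustrative.
import json
from pathlib import Path
from PIL import Image

RESULTS_FILE = "megadetector_results.json"
IMAGE_ROOT = Path("/path/to/camera_trap_images")
CROP_ROOT = Path("/path/to/classifier_crops")
CONF_THRESHOLD = 0.5

with open(RESULTS_FILE) as f:
    results = json.load(f)

for entry in results["images"]:
    image_path = IMAGE_ROOT / entry["file"]
    with Image.open(image_path) as img:
        w, h = img.size
        for i, det in enumerate(entry.get("detections", [])):
            if det["conf"] < CONF_THRESHOLD:
                continue
            x, y, bw, bh = det["bbox"]  # normalized coordinates
            box = (int(x * w), int(y * h), int((x + bw) * w), int((y + bh) * h))
            crop = img.crop(box).convert("RGB")
            out = CROP_ROOT / f"{image_path.stem}_det{i}.jpg"
            out.parent.mkdir(parents=True, exist_ok=True)
            crop.save(out)
```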

All that said, here's another "teaser image" of what you get at the end of training and running a classifier:

data_management

Code for:

  • Converting frequently-used metadata formats to COCO Camera Traps format (a minimal example of the format follows this list)
  • Converting the output of AI models (especially YOLOv5) to the format used for AI results throughout this repo
  • Creating, visualizing, and editing COCO Camera Traps .json databases
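
As a hedged illustration of the target format in the first item above, here is a minimal COCO Camera Traps skeleton. The fields shown (images with "id", "file_name", "location", and "datetime"; annotations linking "image_id" to "category_id"; a "categories" list) follow the published format description, but defer to the format documentation on lila.science for the authoritative field set; all values here are invented placeholders.

```python
# Hedged sketch: a minimal COCO Camera Traps database written to JSON.
# Field names follow the published format description; values are placeholders.
import json

coco_camera_traps = {
    "info": {"version": "1.0", "description": "Example database (placeholder)"},
    "categories": [
        {"id": 0, "name": "empty"},
        {"id": 1, "name": "deer"},
    ],
    "images": [
        {
            "id": "img_0001",
            "file_name": "site_a/img_0001.jpg",
            "location": "site_a",               # camera/site identifier
            "datetime": "2023-06-01 04:12:00",  # capture time, if available
            "width": 1920,
            "height": 1080,
        }
    ],
    "annotations": [
        {"id": "ann_0001", "image_id": "img_0001", "category_id": 1}
    ],
}

with open("example_coco_camera_traps.json", "w") as f:
    json.dump(coco_camera_traps, f, indent=2)
```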

detection

Code for training, running, and evaluating MegaDetector.

research

Ongoing research projects that use this repository in one way or another; as of the time I'm editing this README, there are projects in this folder around active learning and the use of simulated environments for training data augmentation.

sandbox

Random things that don't fit in any other directory. For example:

  • A not-super-useful but super-duper-satisfying and mostly-successful attempt to use OCR to pull metadata out of image pixels in a fairly generic way, to handle those pesky cases when image metadata is lost.
  • Experimental postprocessing scripts that were built for a single use case

taxonomy-mapping

Code to facilitate mapping data-set-specific categories (e.g. "lion", which means very different things in Idaho vs. South Africa) to a standard taxonomy.
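
As a hedged illustration of the problem this folder addresses, the snippet below maps (dataset, raw label) pairs to a shared scientific name. The dataset names, labels, and taxa are invented for illustration and are not taken from the repo's actual mapping files.

```python
# Hedged sketch: map dataset-specific labels to a standard taxon.
# All dataset names, labels, and taxa here are illustrative.
DATASET_LABEL_TO_TAXON = {
    ("idaho_cameras", "lion"): "Puma concolor",        # "lion" = mountain lion
    ("south_africa_cameras", "lion"): "Panthera leo",  # "lion" = African lion
    ("idaho_cameras", "deer"): "Odocoileus hemionus",
}

def to_standard_taxon(dataset: str, raw_label: str) -> str:
    """Return the standardized scientific name for a dataset-specific label."""
    return DATASET_LABEL_TO_TAXON.get((dataset, raw_label.lower()), "unknown")

print(to_standard_taxon("idaho_cameras", "lion"))         # Puma concolor
print(to_standard_taxon("south_africa_cameras", "lion"))  # Panthera leo
```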

test-images

A handful of images from LILA that facilitate testing and debugging.

visualization

Shared tools for visualizing images with ground truth and/or predicted annotations.
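
As a hedged sketch of what these tools do, the snippet below draws MegaDetector-style normalized boxes onto an image with plain PIL. It does not use the repo's own visualization utilities; the box format, category colors, and example detection are assumptions for illustration.

```python
# Hedged sketch: draw normalized [x, y, width, height] boxes on an image.
# Plain PIL, not the repo's visualization utilities; colors and the example
# detection are illustrative.
from PIL import Image, ImageDraw

CATEGORY_COLORS = {"1": "red", "2": "blue", "3": "yellow"}  # animal/person/vehicle

def draw_detections(image_path, detections, output_path, threshold=0.2):
    """Render boxes whose confidence exceeds 'threshold' and save the result."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for det in detections:
        if det["conf"] < threshold:
            continue
        x, y, bw, bh = det["bbox"]  # normalized coordinates
        box = [x * w, y * h, (x + bw) * w, (y + bh) * h]
        color = CATEGORY_COLORS.get(det["category"], "white")
        draw.rectangle(box, outline=color, width=3)
        draw.text((box[0], max(box[1] - 12, 0)), f"{det['conf']:.2f}", fill=color)
    img.save(output_path)

# Example usage with an invented detection:
draw_detections(
    "example_camera_trap_image.jpg",
    [{"category": "1", "conf": 0.92, "bbox": [0.41, 0.35, 0.18, 0.22]}],
    "example_with_boxes.jpg",
)
```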

Gratuitous pretty camera trap picture

Bird flying above water

Image credit USDA, from the NACTI data set.

License

This repository is licensed under the MIT license.

