
TEST-SET SUBMISSION PERIOD OPENS SOON!

Read more here and try out the evaluation code here.

Densely-Annotated Video Segmentation Contest

Welcome to the Weights & Biases video segmentation contest!

Your goal is to train a neural network model that can select the foreground object from a video clip, as in the example below:

[example clip: a video with its primary foreground object highlighted by a segmentation mask]

This task is known as "primary object segmentation" or "video object segmentation" (VOS).

The quality of segmentations will be assessed using the Intersection over Union (IoU) metric.
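For intuition: the IoU of two binary masks is the area where they overlap, divided by the area covered by either. A minimal NumPy sketch (not the contest's official scoring code, which lives in the contest package; see Formatting Your Results below):

import numpy as np

def binary_iou_sketch(pred, target, threshold=128):
    """Rough IoU between two greyscale uint8 masks."""
    pred_bin = pred >= threshold      # binarize the predicted mask
    target_bin = target >= threshold  # binarize the ground-truth mask
    intersection = np.logical_and(pred_bin, target_bin).sum()
    union = np.logical_or(pred_bin, target_bin).sum()
    return intersection / union if union > 0 else 1.0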

To mimic the constraints of designing for limited compute, like mobile devices, you're required to keep your network's parameter count below 50 million. Tools for profiling networks built in Keras and PyTorch are included in the contest repository.
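Those profiling tools are the authoritative count, but as a quick sanity check you can also count parameters directly; in the snippet below, model is a hypothetical stand-in for your own network:

# PyTorch: total elements across all parameter tensors
n_params = sum(p.numel() for p in model.parameters())

# Keras equivalent:
# n_params = model.count_params()

assert n_params < 50_000_000, "over the contest's 50M-parameter budget"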

Prizes

The prizes will be online retail gift certificates. For winners inside the United States, this will be an Amazon gift card.

  • First prize - $1000 gift certificate
  • Second prize - $500 gift certificate
  • Third prize - $250 gift certificate

How to Participate

The contest is open to Qualcomm employees only.

This competition is split into two phases:

  1. a training phase, where you can train on a public training set and compare your performance to other participants on a public validation set, and
  2. a test phase, where a test dataset without labels will be provided and participants will submit their solutions to be scored on a private leaderboard.

Prizes will be awarded based on performance during the test phase only. Be careful not to over-engineer your model on the training and validation data! In large public competitions and in industrial machine learning, this kind of over-fitting dooms many promising projects.

The test phase will begin at midnight Pacific time on March 29th, 2021. See the Timeline section below.

During the training phase

  • Sign up for W&B using your Qualcomm email. Note: The contest is open to Qualcomm employees only.
  • Check out the Colab notebook for your preferred framework (PyTorch/Lightning or TensorFlow/Keras) for some starter code, then build on it with your own custom data pipelines, training schemes, and model architectures. You can develop in Colab or locally (see Installing the contest Package below).
  • Once you're happy with your trained model, produce your formatted results, as described in the Formatting Your Results section below.
  • Evaluate those results using the evaluation notebook. See that notebook for details on how results will be scored.
  • Submit your evaluation run to the public leaderboard.

Submissions are manually reviewed and will be approved within two business days.

Submitting evaluation runs is a great way to ensure your code runs smoothly on data in the format used in the test phase, that your results are properly formatted, and that your submissions are valid, so make sure to do so!

During the testing phase

  • Download the video clips for the test data set (link information TBA).
  • Run your trained model on that data, producing formatted results, just like in the training phase (see Formatting Your Results below; a sketch of such an inference loop follows this list).
  • Submit your results run to the private leaderboard (link information TBA).
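As a rough illustration of the inference step, here is a minimal PyTorch sketch. The names model and frame_paths are hypothetical stand-ins for your trained network and the ordered list of test-frame image files:

import os
import numpy as np
import torch
from PIL import Image

os.makedirs("output", exist_ok=True)
model.eval()
with torch.no_grad():
    for i, frame_path in enumerate(frame_paths):
        frame = np.array(Image.open(frame_path).convert("RGB")) / 255.0
        x = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0).float()
        mask = model(x).squeeze().numpy()        # assumes outputs in [0, 1]
        mask_u8 = (mask * 255).astype(np.uint8)  # 8-bit greyscale, PIL "L" mode
        Image.fromarray(mask_u8, mode="L").save(os.path.join("output", f"{i}.png"))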

Getting Help

New to online contests with W&B, deep learning, or video segmentation? No problem! We have posted resources to help you understand the W&B Python library, deep learning frameworks, suitable algorithms, and neural networks more generally; see the Resources section below.

Questions? Use the #qualcomm-competition slack channel, or email [email protected].

Installing the contest Package

This section provides instructions for installing the contest package from the GitHub repository for this competition.

There are three versions of the package: a core version that installs only the tools for formatting results and managing dataset paths, and two variants that add starter tools for the two supported deep learning frameworks.

Check out the starter notebooks (PyTorch, Keras) to see how the package is used.

Installing the core tools

The package can be installed with pip, the standard package installer for Python:

pip install "git+https://github.com/wandb/davis-contest.git#egg=contest"

Installing Keras and PyTorch/Lightning tools

To install the contest.keras or contest.torch framework subpackages, provide the name of the framework at the end of the pip install command, using the optional dependencies syntax:

pip install "git+https://github.com/wandb/davis-contest.git#egg=contest[framework]"

where framework is one of keras, torch, or keras,torch.
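For example, to install with the PyTorch/Lightning extras:

pip install "git+https://github.com/wandb/davis-contest.git#egg=contest[torch]"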

Formatting Your Results

See the starter notebooks (PyTorch, Keras) for more detail, including screenshots and code, on constructing and formatting the results.

Results are to be submitted in the form of a Weights & Biases Artifact. W&B's Artifacts system (docs) provides methods for storing, distributing, and version-controlling datasets, models, and other large files. Artifacts are also used to distribute the training, validation, and test datasets for this contest. See this video tutorial and associated Colab notebook for more on how to use Artifacts.
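If you haven't used Artifacts before, the basic pattern looks roughly like this (a minimal sketch; the project and artifact names here are hypothetical, and the starter notebooks show the contest's actual names):

import wandb

run = wandb.init(project="davis-contest")  # hypothetical project name

# Download a dataset artifact to a local directory:
dataset_dir = run.use_artifact("train-data:latest").download()  # hypothetical name

# ... train your model and write your results to disk ...

# Log a results artifact; note the required nparams metadata key:
results = wandb.Artifact("my-results", type="results",
                         metadata={"nparams": 12_345_678})  # your model's count
results.add_file("paths.json")
results.add_dir("output", name="output")  # the directory of PNG masks
run.log_artifact(results)
run.finish()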

We provide utility functions to produce a results artifact from a directory of model outputs in the repository here.

The best way to check that your results are being formatted correctly is to run the submission notebook, look through the table that it uploads to Weights & Biases, and submit the run for approval.

Format of the results artifact

A results artifact must contain at least the following:

  • a file called paths.json, containing a key "output" whose value is a dictionary ("object" in JSON lingo) with keys that are integer strings and values that are strings defining paths to files,
  • at each path, a PNG file representing the model's outputs for the input frame from the dataset with the same integer index. This PNG file should be greyscale/luminance, with each byte representing an unsigned 8-bit integer (the L mode in PIL), and
  • in the metadata, the key nparams, counting the number of parameters in the model (including all components).

The paths.json file can be generated easily by saving a pandas DataFrame with an integer index and a column called "output" with the .to_json method. See the code in the starter notebooks and repository for examples, including for how to create the W&B Artifact.
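For instance, a minimal sketch with illustrative file names:

import pandas as pd

# A DataFrame with an integer index and a single "output" column of paths.
paths = pd.DataFrame({"output": [f"output/{i}.png" for i in range(3)]})

# The default orient ("columns") writes {"output": {"0": ..., "1": ..., ...}},
# which matches the format described above.
paths.to_json("paths.json")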

The contents of the PNG files will be used to compute your IoU score. See the evaluation code in the contest package for details on how results will be scored, in particular the functions iou_from_output and binary_iou.

Timeline

  • February 16 - Contest announced, training phase begins, public leaderboard opens
  • March 29, 12:00am Pacific - training phase ends, test phase begins: test set made available for inference, private leaderboard opens
  • March 31, 11:59pm Pacific - test phase ends: private leaderboard closes to new submissions
  • Mid-April - Winners announced
  • Early May - Retrospective webinar

Other Rules

See Contest Terms & Conditions for details, including eligibility requirements and locations.

  • You are free to use any framework you feel comfortable in, but you are responsible for accurately counting parameters.
  • You may only submit results from one account.
  • You can submit as many runs as you like.
  • You can share small snippets of the code online or in our Slack community, but not the full solution -- that means, e.g., your GitHub repo should not be public, and you should keep your Weights & Biases project set to private.
  • You may similarly use snippets of code from online sources, but the majority of your code should be original. Originality of solution will be taken into account when scoring submissions. Submissions with insufficient novelty will be disqualified.

Colab Notebooks

Click the Badges Below to Access the Colab Notebooks

These Google Colab notebooks describe how to get started with the contest and submit results.

  • Get Started in PyTorch: Open In Colab
  • Get Started in Keras: Open In Colab
  • Evaluate Your Results: Open In Colab
  • Using Pretrained Networks: Open In Colab

Iterating quickly in Colab

Google Colab is a convenient hosted environment you can use to run the baseline and iterate on your models quickly.

To get started:

  • Open the baseline notebook you'd like to work with from the table above.
  • Save a copy in Google Drive for yourself.
  • To ensure the GPU is enabled, click Runtime > Change runtime type and check that the "hardware accelerator" is set to GPU. (You can also verify this from code; see the snippet after this list.)
  • Step through each section, pressing play on the code blocks to run the cells.
  • Add your own data engineering and model code.
  • Review the Getting Started section for details on how to submit results to the public leaderboard.
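A quick way to confirm from inside the notebook that a GPU is visible (shown with PyTorch; in TensorFlow, tf.config.list_physical_devices("GPU") gives the same information):

import torch

print(torch.cuda.is_available())  # should print True on a GPU runtime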

Ask Questions

If you have any questions, please feel free to email us at [email protected] or join our Slack community and post in the channel for this competition: #qualcomm-competition.

Resources
