This repository contains a set of tools for working with the Common Objects in 3D (CO3D) dataset.
The dataset can be downloaded from the following Facebook AI Research web page: download link
This is a `python3` / `PyTorch` codebase.
- Install PyTorch.
- Install PyTorch3D.
- Install the remaining dependencies in `requirements.txt`:

```
pip install lpips visdom tqdm
```
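As a quick sanity check after installation, a small helper like the following can verify that all dependencies are importable (the helper function and its name are ours, not part of the repository):

```python
import importlib.util

def missing_dependencies(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules used by the codebase; pytorch3d comes from the PyTorch3D install step.
required = ["torch", "pytorch3d", "lpips", "visdom", "tqdm"]
print("missing:", missing_dependencies(required))
```

An empty list means all required packages are on the current Python path.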
Note that the core data model in `dataset/types.py` is independent of PyTorch and can be imported and used with other machine-learning frameworks.
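The idea of a framework-independent data model can be illustrated with a plain-Python sketch; the `FrameData` class and its fields below are illustrative only, not the actual types defined in `dataset/types.py`:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class FrameData:
    # Illustrative fields only; see dataset/types.py for the real schema.
    sequence_name: str
    frame_number: int
    image_path: str

def frame_from_json(raw: str) -> FrameData:
    """Build the dataclass from a JSON record, keeping only known fields."""
    record = json.loads(raw)
    names = {f.name for f in fields(FrameData)}
    return FrameData(**{k: v for k, v in record.items() if k in names})

frame = frame_from_json(
    '{"sequence_name": "apple_001", "frame_number": 7, "image_path": "img/7.jpg"}'
)
```

Because nothing here touches PyTorch, records parsed this way can feed any framework's data pipeline.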
`requirements.txt` lists the following dependencies: `lpips`, `visdom`, and `tqdm`.
- Install dependencies - see Installation above.
- Download the dataset here to a given root folder `DATASET_ROOT_FOLDER`.
- In `dataset/dataset_zoo.py`, set the `DATASET_ROOT` variable to your `DATASET_ROOT_FOLDER`:

```
dataset_zoo.py:25: DATASET_ROOT = DATASET_ROOT_FOLDER
```
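If editing the file directly is inconvenient, one common alternative pattern (not built into the repository, and the `CO3D_DATASET_ROOT` environment-variable name is our invention) is to resolve the root from the environment with a fallback:

```python
import os

def resolve_dataset_root(default="DATASET_ROOT_FOLDER"):
    """Prefer a hypothetical CO3D_DATASET_ROOT env var, else the hard-coded default."""
    return os.environ.get("CO3D_DATASET_ROOT", default)

DATASET_ROOT = resolve_dataset_root()
```

This keeps machine-specific paths out of version control.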
- Run `eval_demo.py`:

```
python eval_demo.py
```

Note that `eval_demo.py` runs an evaluation of a simple depth-based image rendering (DBIR) model on the same data as in the paper. Hence, the results are directly comparable to the numbers reported in the paper.
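To give a rough intuition for what a depth-based image rendering model does, here is a simplified NumPy sketch of the core geometric step (our illustration, not the repository's implementation): each source pixel is lifted to a 3D point using its depth and the camera intrinsics, transformed into the target camera frame, and reprojected.

```python
import numpy as np

def unproject(depth, K):
    """Lift a depth map (H, W) to 3D points in camera coordinates, shape (3, H*W)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix          # per-pixel viewing rays
    return rays * depth.reshape(1, -1)     # scale each ray by its depth

def reproject(points, K, R, t):
    """Project camera-frame points into a target view with pose (R, t)."""
    p = K @ (R @ points + t.reshape(3, 1))
    return p[:2] / p[2]                    # perspective divide -> pixel coords

# Toy check: with an identity target pose, pixels map back to themselves.
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 16.0], [0.0, 0.0, 1.0]])
depth = np.full((32, 32), 2.0)
uv = reproject(unproject(depth, K), K, np.eye(3), np.zeros(3))
```

A full DBIR pipeline would additionally splat the reprojected points into the target image and handle occlusions, which this sketch omits.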
Unit tests can be executed with:

```
python -m unittest
```
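`python -m unittest` discovers and runs `test_*.py` modules; a minimal test in that style (a stand-in of ours, not one of the repository's tests) looks like:

```python
import unittest

class TestAddition(unittest.TestCase):
    # Stand-in test case; the repository's real tests live in test_*.py modules.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the case programmatically; `python -m unittest` does this via discovery.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAddition)
)
```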
The CO3D codebase is released under the BSD License.
The following presentation of the dataset was delivered at the Extreme Vision Workshop at CVPR 2021: