
This project forked from athn-nik/teach


Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans"

Home Page: https://teach.is.tue.mpg.de

License: Other

Languages: Python 99.92%, Shell 0.08%

Introduction

TEACH: Temporal Action Compositions for 3D Humans ArXiv PDF Project Page

Nikos Athanasiou · Mathis Petrovich · Michael J. Black · Gül Varol

3DV 2022

Check our upcoming YouTube video for a quick overview and our paper for more details.

Video

Features

This repository provides:

  • Instructions on how to prepare the datasets used in the experiments.
  • The training code:
    • for both baselines,
    • for the TEACH method.
  • A simple interactive demo that, given text prompts and durations, returns:
    • a .npy file containing the vertices of the body generated by TEACH,
    • a video demonstrating the result.

Updates

To be uploaded:

  • Instructions about the baselines and how to run them.
  • Instructions for sampling from and evaluating all of the models with the code.
  • The rendering code for the Blender renderings used in the paper.

Getting Started

TEACH has been implemented and tested on Ubuntu 20.04 with python >= 3.9.

Clone the repo:

git clone https://github.com/athn-nik/teach.git

Then, install DistilBERT:

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..

Install the requirements using virtualenv:

# pip
source scripts/install_pip.sh

You can do something equivalent with conda as well.

Running the Demo

We have prepared a demo to run TEACH on arbitrary text prompts. First, you need to download the required data (i.e., our trained model) from our website.

Then, running the demo is as simple as:

python interact_teach.py folder=/path/to/experiment output=/path/to/yourfname texts='[text prompt1, text prompt2, text prompt3, <more prompts comma divided>]' durs='[dur1, dur2, dur3, ...]'
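The resulting .npy file can be inspected with NumPy. A minimal sketch follows; the exact array layout is an assumption here (a frames × vertices × 3 float array is used as a hypothetical stand-in for the real demo output):

```python
import os
import tempfile

import numpy as np

# Hypothetical stand-in for the demo output: a (frames, vertices, 3) array.
# The real file written by interact_teach.py may use a different layout.
verts = np.random.rand(60, 6890, 3).astype(np.float32)  # 6890 = SMPL vertex count

path = os.path.join(tempfile.mkdtemp(), "teach_demo_verts.npy")
np.save(path, verts)

# Load it back and check the layout before feeding it to a renderer.
loaded = np.load(path)
print(loaded.shape)  # (60, 6890, 3)
```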

Data

Download the data from the AMASS website. Then, run this command to extract the AMASS sequences that are annotated in BABEL:

python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/default_is_/babel/babel-smplh-30fps-male --model-type smplh --use-betas --gender male

Download the data from the TEACH website, after signing in. The data TEACH was trained on is a processed version of BABEL; hence, we provide it directly to you via our website, where you will also find more relevant details. Finally, download the male SMPLH body model from the SMPLX website in pickle format.

Then run this script (changing the paths inside it accordingly) to extract the different BABEL splits from AMASS:

python scripts/amass_splits_babel.py

Then create a directory named data and put the BABEL data and the processed AMASS data in it. You should end up with a data folder with a structure like this:

data
|-- amass
|   `-- your-processed-amass-data
|
|-- babel
|   |-- babel-teach
|   |   `-- ...
|   `-- babel-smplh-30fps-male
|       `-- ...
|
`-- smpl_models
    `-- smplh
        `-- SMPLH_MALE.pkl
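A quick sanity check for the layout above can be sketched as follows (the helper name and the exact list of required sub-paths are our own; adjust them if your layout differs):

```python
from pathlib import Path


def check_data_layout(root: str) -> list[str]:
    """Return the expected sub-paths that are missing under `root`."""
    expected = [
        "amass",
        "babel/babel-teach",
        "babel/babel-smplh-30fps-male",
        "smpl_models/smplh/SMPLH_MALE.pkl",
    ]
    root_path = Path(root)
    return [p for p in expected if not (root_path / p).exists()]
```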

Be careful not to push any data! Then softlink your data directory inside this repo:

ln -s /path/to/data

Training

To start training, after activating your environment, run:

python train.py experiment=baseline logger=none

Explore configs/train.yaml to change basic settings, such as where your output is stored, or which data to use if you want to run a small experiment on a subset of the data. [TODO]: More on this coming soon.

Sampling & Evaluation

Here are some commands if you want to sample from the validation set and evaluate on the metrics reported in the paper:

python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8

In general, the folder is: folder_our/<project>/<dataname_config>/<experiment>/<run_id>. This folder should contain a checkpoints directory with a last.ckpt file (the relevant checkpoint) and a .hydra directory from which the configuration will be pulled. It is created during training in the output directory, and is provided on our website for the experiments in the paper.

  • align=trans: aligns only the translation; align=full also aligns the global orientation.
  • slerp_ws: whether slerp is applied (=null disables it) and the size of its window.
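For intuition, spherical linear interpolation between two rotations (represented here as quaternions) can be sketched as below. This is a generic slerp, not the repository's exact implementation; the window size only controls how many frames around each transition are blended:

```python
import numpy as np


def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between unit quaternions q0 and q1,
    with t in [0, 1]."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:        # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```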

Then for the evaluation you should do:

python eval.py folder=/path/to/experiment align=true slerp=true

The two extra parameters select the samples on which the evaluation will be performed.
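To illustrate what translation alignment amounts to, a simplified sketch is shown below (the function name and the joints-layout assumption are ours, not the repository's exact logic): shift the second segment so its first root position matches the last root position of the first segment.

```python
import numpy as np


def align_translation(seg_a: np.ndarray, seg_b: np.ndarray) -> np.ndarray:
    """Shift seg_b so its first root position matches seg_a's last root
    position. Segments are (frames, joints, 3); joint 0 is assumed to be
    the root."""
    offset = seg_a[-1, 0] - seg_b[0, 0]
    return seg_b + offset  # broadcast over frames and joints
```

Full alignment would additionally rotate seg_b so its global orientation is continuous across the transition.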

Transition distance

  • Without alignment column: python compute_td.py folder=/path/to/experiment align_full_bodies=false align_only_trans=true

  • With alignment column: python compute_td.py folder=/path/to/experiment align_full_bodies=true align_only_trans=false
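As a rough sketch of what a transition-distance metric measures (this is our simplification, not compute_td.py's exact formula): the mean per-vertex Euclidean distance between the last frame of one action segment and the first frame of the next.

```python
import numpy as np


def transition_distance(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Mean per-vertex Euclidean distance between the last frame of seg_a
    and the first frame of seg_b; each segment is (frames, vertices, 3)."""
    last, first = seg_a[-1], seg_b[0]
    return float(np.linalg.norm(last - first, axis=-1).mean())
```

A lower value indicates a smoother hand-off between consecutive actions.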

[TODO]: More on this coming soon.

Citation

@inproceedings{TEACH:3DV:2022,
  title={TEACH: Temporal Action Compositions for 3D Humans},
  author={Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l },
  booktitle = {International Conference on 3D Vision (3DV)},
  month = {September},
  year = {2022}
}

License

This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.

Acknowledgments

We thank Benjamin Pellkofer for his IT support.

References

Many parts of this code are based on the official implementation of TEMOS.

Contact

This code repository was implemented mainly by Nikos Athanasiou with the help of Mathis Petrovich.

Give a ⭐ if you like the project.

For commercial licensing (and all related questions for business applications), please contact [email protected].

teach's People

Contributors

athn-nik
