
3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting

This repository contains the implementation of our paper 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting.

Below, you can find detailed instructions for using pretrained models and training your own models.

If you find our code useful, please cite:

@article{qian20233dgsavatar,
   title={3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting}, 
   author={Zhiyin Qian and Shaofei Wang and Marko Mihajlovic and Andreas Geiger and Siyu Tang},
   journal={arXiv preprint arXiv:2312.09228},
   year={2023},
}

Installation

Environment Setup

This repository has been tested on the following platform:

  1. Python 3.7.13, PyTorch 1.12.1 with CUDA 11.6 and cuDNN 8.3.2, Ubuntu 22.04/CentOS 7.9.2009

To clone the repo, run either:

git clone --recursive https://github.com/mikeqzy/3dgs-avatar-release.git

or

git clone https://github.com/mikeqzy/3dgs-avatar-release.git
cd 3dgs-avatar-release
git submodule update --init --recursive

Next, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called 3dgs-avatar using:

conda env create -f environment.yml
conda activate 3dgs-avatar
# install tinycudann
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
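
After installation, an optional sanity check confirms that PyTorch sees your GPU and that the tiny-cuda-nn bindings import cleanly (a minimal sketch, assuming a CUDA-capable machine):

# optional sanity check for the environment
import torch
import tinycudann as tcnn

print(torch.cuda.is_available())  # should print True on a CUDA machine
print(tcnn.__file__)              # confirms the compiled bindings are importable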

SMPL Setup

Download SMPL v1.0 for Python 2.7 from the SMPL website (for the male and female models) and SMPLIFY_CODE_V2.ZIP from the SMPLify website (for the neutral model). After downloading:

  1. Inside SMPL_python_v.1.0.0.zip, the male and female models are smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl and smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl, respectively. Inside mpips_smplify_public_v2.zip, the neutral model is smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.

  2. Remove the chumpy objects from these .pkl models using this code under a Python 2 environment (you can create such an environment with conda); a minimal sketch of this step is shown after the folder tree below.

  3. Rename the newly generated .pkl files and copy them to the corresponding subdirectories under ./body_models/smpl/.

Eventually, the ./body_models folder should have the following structure:

body_models
 └-- smpl
    ├-- male
    |   └-- model.pkl
    ├-- female
    |   └-- model.pkl
    └-- neutral
        └-- model.pkl
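
For reference, the chumpy-stripping and renaming step could look roughly like the sketch below. This is a minimal sketch, not the exact script linked above: it assumes a Python 2 environment with numpy and chumpy installed, and the input paths are placeholders pointing at the extracted archives.

# clean_smpl.py -- minimal Python 2 sketch for stripping chumpy objects
# from the SMPL .pkl files; not the exact script linked above
import os
import pickle

import numpy as np
import chumpy as ch

def strip_chumpy(in_path, out_path):
    with open(in_path, 'rb') as f:
        data = pickle.load(f)
    for key in list(data.keys()):
        if isinstance(data[key], ch.Ch):
            # convert chumpy arrays to plain numpy so the models can be
            # loaded later without a chumpy dependency
            data[key] = np.array(data[key])
    out_dir = os.path.dirname(out_path)
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    with open(out_path, 'wb') as f:
        pickle.dump(data, f)

# placeholder paths: adjust to wherever you extracted the archives
strip_chumpy('smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl',
             'body_models/smpl/male/model.pkl')
strip_chumpy('smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl',
             'body_models/smpl/female/model.pkl')
strip_chumpy('smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl',
             'body_models/smpl/neutral/model.pkl')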

Then, run the following script to extract necessary SMPL parameters used in our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved into ./body_models/misc/.

Dataset preparation

Due to licensing restrictions, we cannot publicly distribute our preprocessed ZJU-MoCap and PeopleSnapshot data. Please follow the instructions in ARAH to download and preprocess the datasets. For PeopleSnapshot, we use the optimized SMPL parameters from Anim-NeRF, available here.

Results on ZJU-MoCap

For easy comparison with our approach, we also provide all our pretrained models and renderings on the ZJU-MoCap dataset here.

Training

To train new networks from scratch, run

# ZJU-MoCap
python train.py dataset=zjumocap_377_mono
# PeopleSnapshot
python train.py dataset=ps_female_3 option=iter30k pose_correction=none 

To train on a different subject, simply choose from the configs in configs/dataset/.
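
For example, a small driver script can train several subjects back to back. This is a sketch: the subject config names below are assumptions, so check configs/dataset/ for the ones actually available.

# hypothetical driver: train several subjects in sequence
import subprocess

subjects = ['zjumocap_377_mono', 'zjumocap_386_mono', 'zjumocap_387_mono']
for subject in subjects:
    subprocess.run(['python', 'train.py', 'dataset={}'.format(subject)],
                   check=True)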

We use wandb for online logging; it is free of charge but requires registration.

Evaluation

To evaluate the method for a specified subject, run

# ZJU-MoCap
python render.py mode=test dataset.test_mode=view dataset=zjumocap_377_mono
# PeopleSnapshot
python render.py mode=test dataset.test_mode=pose pose_correction=none dataset=ps_female_3

Test on out-of-distribution poses

First, please download the preprocessed AIST++ and AMASS sequences for subjects in ZJU-MoCap here and extract them under the corresponding subject folder ${ZJU_ROOT}/CoreView_${SUBJECT}.

To animate the subject under out-of-distribution poses, run

python render.py mode=predict dataset.predict_seq=0 dataset=zjumocap_377_mono

We provide four preprocessed sequences for each ZJU-MoCap subject, selected by setting dataset.predict_seq to 0, 1, 2, or 3, where dataset.predict_seq=3 corresponds to the canonical rendering.
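
To render all four sequences for a subject in one go, a simple driver loop works (a sketch built on the command above):

# render every preprocessed out-of-distribution sequence for one subject
import subprocess

for seq in range(4):  # predict_seq=3 is the canonical rendering
    subprocess.run(['python', 'render.py', 'mode=predict',
                    'dataset.predict_seq={}'.format(seq),
                    'dataset=zjumocap_377_mono'],
                   check=True)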

Currently, the code only supports animating ZJU-MoCap models under out-of-distribution poses.

License

We use the MIT License for the 3DGS-Avatar code, which covers:

configs
dataset
models
utils/dataset_utils.py
extract_smpl_parameters.py
render.py
train.py

The rest of the code is modified from 3DGS. Please consult their license and cite them.

Acknowledgement

This project is built on the source code of 3DGS. We also use the data preprocessing script and parts of the network implementation from ARAH. We sincerely thank these authors for their awesome work.
