
UNIF

[ECCV-2022] The official repo for the paper "UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation".

project / arxiv / video / poster

teaser

Installation

1. Install PyTorch and the CUDA runtime in a conda environment

# Create a new virtual environment with conda
conda create --name UNIF python=3.9

# Install PyTorch along with CUDA runtime
conda install cudatoolkit=11.3 pytorch=1.12.0 -c pytorch

2. Install PyTorch3D

You can either install from prebuilt binaries

# install with conda
conda install pytorch3d -c pytorch3d

or install from source

# runtime dependencies
conda install -c fvcore -c iopath -c conda-forge fvcore iopath

# build time dependency
conda install -c bottler nvidiacub

# building from source
pip install "git+https://github.com/facebookresearch/pytorch3d.git"

Refer to the official documentation if problems occur.

3. Install other dependencies

pip install -r requirements.txt

# We use `mise` from *Occupancy Networks* to speed up mesh generation with adaptive point inference. It is built automatically by running
pip install -e .

# To use the offscreen renderer of `PyRender`, set the environment variable:
export PYOPENGL_PLATFORM=egl
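
Before moving on, you may want to sanity-check the environment with a few imports, for example as below (the exact versions printed will depend on your setup):

# Quick sanity check of the environment set up above.
import os

import torch
import pytorch3d

print("PyTorch:          ", torch.__version__)
print("CUDA available:   ", torch.cuda.is_available())
print("PyTorch3D:        ", pytorch3d.__version__)
# Off-screen rendering with PyRender expects PYOPENGL_PLATFORM=egl (see above).
print("PYOPENGL_PLATFORM:", os.environ.get("PYOPENGL_PLATFORM", "<not set>"))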

Data

SMPL

Download SMPL-1.0.0 from the homepage, extract it, and put basicModel_f_lbs_10_207_0_v1.0.0.pkl and basicmodel_m_lbs_10_207_0_v1.0.0.pkl under ./data/smpl/models/.
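
To quickly confirm the files are in place, a small check like the following can help (the paths match those above; the original SMPL pickles were written under Python 2, so unpickling typically needs encoding="latin1" and may additionally require the chumpy package, which is why the load is wrapped in a try/except):

# Quick check that the SMPL model files are where the code expects them.
import os
import pickle

smpl_dir = "./data/smpl/models"
expected = [
    "basicModel_f_lbs_10_207_0_v1.0.0.pkl",   # female model
    "basicmodel_m_lbs_10_207_0_v1.0.0.pkl",   # male model
]

for name in expected:
    path = os.path.join(smpl_dir, name)
    if not os.path.isfile(path):
        print("MISSING", path)
        continue
    try:
        # SMPL pickles come from Python 2 and often reference chumpy arrays.
        with open(path, "rb") as f:
            model = pickle.load(f, encoding="latin1")
        print("OK     ", path, "- keys:", sorted(model.keys())[:5], "...")
    except Exception as err:
        print("FOUND  ", path, "- but could not unpickle:", err)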

CAPE

The CAPE dataset can be downloaded from the homepage.

  1. Follow the "Option 2: Download by subject" section on the Download page. Download the per-subject mesh data for 00032, 00096, 00159, and 03223, as only these four subjects have raw scans released.
  2. Follow the "Raw Scans" section on the Download page and download the per-subject scan data into ./data/cape_release/raw_scans.
  3. The dataset does not provide the shape parameters (beta) for each subject. You can download the beta parameters fitted by us from this OneDrive link. You can also refer to this document to fit SMPL parameters within our framework.

ClothSeq

  1. Download the dataset from the Neural-GIF repo. Arrange the files under ./data/ClothSeq/.
  2. The raw scans of the ShrugsPants sequence are 1000 times larger in scale than those of the other two sequences, so we scale them down with the following commands:
mv data/ClothSeq/ShrugsPants/scans data/ClothSeq/ShrugsPants/scans_old
mkdir data/ClothSeq/ShrugsPants/scans
./tools/clothseq_clean.py data/ClothSeq/ShrugsPants/scans_old data/ClothSeq/ShrugsPants/scans
  3. Since the .obj files are slow to load, we convert them to .ply files with the following command (a trimesh-based sketch of steps 2 and 3 follows this list):
python tools/obj2ply.py data/ClothSeq/JacketPants/scans data/ClothSeq/JacketPants/scans-ply
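
For illustration only, the two preprocessing steps above could also be done with a short trimesh script like the sketch below. The repo's tools/clothseq_clean.py and tools/obj2ply.py are the authoritative implementations; the function names here are made up for the example.

# Illustrative sketch only -- not tools/clothseq_clean.py or tools/obj2ply.py.
import glob
import os

import trimesh


def rescale_scans(src_dir, dst_dir, factor=1000.0):
    """Divide vertex coordinates by `factor` and write the rescaled scans to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src_dir, "*"))):
        mesh = trimesh.load(path, force="mesh", process=False)
        mesh.vertices /= factor
        mesh.export(os.path.join(dst_dir, os.path.basename(path)))


def obj_to_ply(src_dir, dst_dir):
    """Convert every .obj scan to .ply, which loads much faster."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src_dir, "*.obj"))):
        mesh = trimesh.load(path, force="mesh", process=False)
        name = os.path.splitext(os.path.basename(path))[0] + ".ply"
        mesh.export(os.path.join(dst_dir, name))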

Experiments

CAPE (raw scans)

Train and validation (unseen poses)

python main.py --cfg config/cape-scan-subject-cloth_unif.py \
EXP.tag 00032_SS_SCAN_UNIF-20_APS-alpha2beta0_deltaSoftMin200 \
DATASET.kwargs.subject_name 00032 \
DATASET.kwargs.cloth_type shortshort \

To test on extrapolated poses for metrics and partial visual results, add the arguments

...
EXP.test_only True \
EXP.checkpoint <ckpt-path> \

To test on interpolated poses, further add the argument

...
DATASET.kwargs.test_interpolation True \

If you only need the visual results (without metrics and losses), you can save computation by adding the argument

...
EXP.TEST.external_query True \

To save all results in each batch, add the argument

...
EXP.TEST.save_all_results True \

ClothSeq (raw scans)

python main.py --cfg config/clothseq_frames_unif.py \
EXP.tag JacketShorts_SCAN_UNIF-20_APS-alpha2beta0_deltaSoftMin200 \
DATASET.kwargs.clip_name JacketShorts \
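
In all of the commands above, the trailing KEY VALUE pairs override entries of the config file passed to --cfg, addressed by dotted keys such as EXP.TEST.save_all_results. As a rough illustration of this pattern only (not the repo's actual config code), dotted overrides can be applied to a nested configuration like this:

# Illustration of dotted-key command-line overrides (not the repo's actual config system).
import ast


def apply_overrides(cfg, overrides):
    """Apply ["A.B.c", "value", ...] pairs to a nested dict config in place."""
    assert len(overrides) % 2 == 0, "overrides must come in KEY VALUE pairs"
    for key, raw in zip(overrides[0::2], overrides[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for parent in parents:
            node = node.setdefault(parent, {})
        try:
            node[leaf] = ast.literal_eval(raw)  # numbers, booleans, lists, ...
        except (ValueError, SyntaxError):
            node[leaf] = raw                    # keep plain strings as-is
    return cfg


# e.g. apply_overrides(cfg, ["EXP.test_only", "True", "EXP.checkpoint", "<ckpt-path>"])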

Troubleshooting

  • ImportError: ('Unable to load EGL library', 'EGL: cannot open shared object file: No such file or directory', 'EGL', None)
sudo apt install libosmesa6-dev freeglut3-dev

Cite

@inproceedings{qian2022_unif,
  title={UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation},
  author={Qian, Shenhan and Xu, Jiale and Liu, Ziwei and Ma, Liqian and Gao, Shenghua},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}
