This is the official PyTorch implementation of our paper *3D Human Mesh Estimation from Virtual Markers*.
Below are the learned virtual markers and the overall framework.
- Provide inference code
- Clone this codebase as `${Project}`.
- Install dependencies. This project is developed with Python >= 3.8 on Ubuntu 16.04. NVIDIA GPUs are needed. We recommend using an Anaconda virtual environment.
  ```bash
  # 1. Create a conda virtual environment.
  conda create -n pytorch python=3.8 -y
  conda activate pytorch

  # 2. Install PyTorch >= v1.6.0 following the [official instructions](https://pytorch.org/). Please adapt the CUDA version to yours.
  pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

  # 3. Install other packages. This project doesn't have any special or difficult-to-install dependencies.
  sh requirements.sh
  ```
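  After installation, a quick sanity check (a minimal sketch, not part of this repo) confirms that PyTorch sees your GPU:

  ```python
  # Sanity-check the PyTorch + CUDA install.
  import torch

  print(torch.__version__)          # e.g. 1.8.0+cu111 with the command above
  print(torch.cuda.is_available())  # should be True on a CUDA machine
  print(torch.cuda.device_count())  # number of visible GPUs
  ```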
- Prepare the SMPL layer. We use `smplx` (see the loading sketch after this list).
  - Install the `smplx` package by `pip install smplx`.
  - Download `basicModel_f_lbs_10_207_0_v1.0.0.pkl`, `basicModel_m_lbs_10_207_0_v1.0.0.pkl`, and `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from here (female & male) and here (neutral) to `${Project}/data/smpl`. Please rename them to `SMPL_FEMALE.pkl`, `SMPL_MALE.pkl`, and `SMPL_NEUTRAL.pkl`, respectively.
  - Download the other SMPL-related files from here and put them in `${Project}/data/smpl`.
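  Once the files are in place, a minimal loading check, assuming `smplx`'s standard API (this sketch is not part of the repo):

  ```python
  # Verify the SMPL files load through smplx; run from ${Project}.
  import torch
  import smplx

  # smplx resolves data/smpl/SMPL_NEUTRAL.pkl from the folder plus gender.
  smpl = smplx.SMPL(model_path='data/smpl', gender='neutral')
  out = smpl(betas=torch.zeros(1, 10),          # 10 shape coefficients
             body_pose=torch.zeros(1, 69),      # 23 body joints * 3 axis-angle
             global_orient=torch.zeros(1, 3))
  print(out.vertices.shape)  # SMPL template mesh: torch.Size([1, 6890, 3])
  ```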
- Download data following the Data section. In summary, your directory tree should look like this:
  ```
  ${Project}
  ├── assets
  ├── command
  ├── configs
  ├── data
  ├── demo
  ├── experiment
  ├── inputs
  ├── lib
  ├── main
  ├── models
  ├── README.md
  └── requirements.sh
  ```
- `assets` contains the body virtual markers in `npz` format. Feel free to use them (see the inspection sketch after this list).
- `command` contains the running scripts.
- `configs` contains the configurations in `yml` format.
- `data` contains soft links to the images and annotations directories.
- `lib` contains the core code of our method.
- `main` contains the high-level code for training and testing the network.
- `models` contains the pre-trained weights. Download from here.
- `experiment` is created automatically when running the code; it contains the outputs, including trained model weights, test metrics, and visualized results.
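To get a look at the released virtual markers, a short sketch like this can help (the exact `.npz` filename under `assets` is repo-specific; `virtual_markers.npz` below is hypothetical):

```python
# Inspect the virtual-marker definition shipped in assets/.
# 'assets/virtual_markers.npz' is a hypothetical filename; use the actual
# .npz file from your checkout.
import numpy as np

data = np.load('assets/virtual_markers.npz')
for key in data.files:
    print(key, data[key].shape, data[key].dtype)
```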
The `data` directory structure should follow the hierarchy below. Please download the images from the official sites. Download all the processed annotation files from here.
```
${Project}
|-- data
    |-- 3DHP
    |   |-- annotations
    |   `-- images
    |-- COCO
    |   |-- annotations
    |   `-- images
    |-- Human36M
    |   |-- annotations
    |   `-- images
    |-- PW3D
    |   |-- annotations
    |   `-- images
    |-- SURREAL
    |   |-- annotations
    |   `-- images
    |-- Up_3D
    |   |-- annotations
    |   `-- images
    `-- smpl
        |-- smpl_indices.pkl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_MALE.pkl
        |-- SMPL_NEUTRAL.pkl
        |-- mesh_downsampling.npz
        |-- J_regressor_extra.npy
        `-- J_regressor_h36m_correct.npy
```
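A quick way to confirm your tree matches (a small sanity-check sketch, not part of the repo; run from `${Project}`):

```python
# Check that every expected data directory exists.
import os

for name in ['3DHP', 'COCO', 'Human36M', 'PW3D', 'SURREAL', 'Up_3D']:
    for sub in ['annotations', 'images']:
        path = os.path.join('data', name, sub)
        print(path, 'ok' if os.path.isdir(path) else 'MISSING')
```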
Every experiment is defined by a config file. Configs for the experiments in the paper can be found in the `./configs` directory. You can use the scripts under `command` to run them.
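For a quick look at what an experiment config contains, a minimal sketch (assuming PyYAML is available; this is illustrative, not the repo's own config loader):

```python
# Peek at the top-level sections of an experiment config.
import yaml

with open('configs/simple3dmesh_train/baseline.yml') as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))  # top-level config keys
```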
To train the model, simply run the scripts below. Specific configurations can be modified in the corresponding `configs/simple3dmesh_train/baseline.yml` file. The default setting uses 4 GPUs (16GB V100). Multi-GPU training is implemented with PyTorch's `DataParallel`. Results can be seen in the `experiment` directory or in TensorBoard.
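For reference, the multi-GPU wrapping follows PyTorch's standard `DataParallel` pattern, roughly as below (a generic illustration, not this repo's training loop):

```python
# Generic PyTorch DataParallel pattern (illustration only).
import torch
import torch.nn as nn

model = nn.Linear(128, 10)           # stand-in for the real network
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicate across all visible GPUs
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
```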
We conduct mixed training on the H3.6M and 3DPW datasets. To get the reported results on 3DPW, please first run `train_h36m.sh` and then load the final weights to train on 3DPW by running `train_pw3d.sh`. We train a separate model on the SURREAL dataset using `train_surreal.sh`.
```bash
sh command/simple3dmesh_train/train_h36m.sh
sh command/simple3dmesh_train/train_pw3d.sh
sh command/simple3dmesh_train/train_surreal.sh
```
To evaluate the model, specify the model path `test.weight_path` in `configs/simple3dmesh_test/baseline_*.yml`. The argument `--mode test` should be set. Results can be seen in the `experiment` directory or in TensorBoard.
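If you prefer to set the weight path programmatically, a sketch along these lines works (assuming PyYAML; `baseline_h36m.yml` and the checkpoint name are hypothetical stand-ins for the `baseline_*.yml` pattern and your downloaded model):

```python
# Point a test config at a downloaded checkpoint before evaluation.
# Both filenames below are hypothetical; substitute your actual files.
import yaml

cfg_path = 'configs/simple3dmesh_test/baseline_h36m.yml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)
cfg.setdefault('test', {})['weight_path'] = 'models/simple3dmesh_h36m.pth'
with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f)  # note: safe_dump rewrites formatting and drops comments
```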
```bash
sh command/simple3dmesh_test/test_h36m.sh
sh command/simple3dmesh_test/test_pw3d.sh
sh command/simple3dmesh_test/test_surreal.sh
```
| Test set | MPVE (mm) | MPJPE (mm) | PA-MPJPE (mm) | Download | Config |
|---|---|---|---|---|---|
| Human3.6M | 58.0 | 47.3 | 32.0 | model | cfg |
| 3DPW | 77.9 | 67.5 | 41.3 | model | cfg |
| SURREAL | 44.7 | 36.9 | 28.9 | model | cfg |
| in-the-wild* | | | | model | |
\* We further train a model for better inference performance on in-the-wild scenes by fine-tuning the 3DPW model on the SURREAL dataset.
Please cite as below if you find this repository helpful to your project:
```bibtex
@article{ma20233d,
  title={3D Human Mesh Estimation from Virtual Markers},
  author={Ma, Xiaoxuan and Su, Jiajun and Wang, Chunyu and Zhu, Wentao and Wang, Yizhou},
  journal={arXiv preprint arXiv:2303.11726},
  year={2023}
}
```
This repo is built on the excellent works GraphCMR, SPIN, Pose2Mesh, HybrIK, and CLIFF. Thanks to these great projects.