
MoRig's Introduction

This is the code repository implementing the paper "MoRig: Motion-Aware Rigging of Character Meshes from Point Clouds".

[2023.02.17] About the ModelsResources dataset: if you are from a research lab and are interested in the dataset for non-commercial, research-only purposes, please send a request email to me at [email protected].

Setup

The project was developed on Ubuntu 20.04 with CUDA 11.3.

conda env create -f environment.yml
conda activate morig
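
As a quick sanity check of the environment (a generic PyTorch check, not part of the original instructions), you can verify that the GPU is visible before training:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"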

Datasets

Download the processed datasets we used from the following links:

  1. ModelsResources (16.9G)
  2. DeformingThings4D (10.5G)
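
The commands below refer to the extracted data through a DATASET_PATH placeholder. The layout sketched here is inferred from the folder flags used throughout this README; adjust it to wherever you extract the archives:

ls "DATASET_PATH/ModelsResources"    # expected splits: train/ val/ test/ train_deform/ val_deform/ test_deform/
ls "DATASET_PATH/DeformingThings4D"  # expected splits: train/ val/ test/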

Testing & Evaluation

Pretrained models

Download our pretrained models from here and place them under checkpoints/ at the repository root, so that they match the paths used in the commands below (e.g. checkpoints/jointnet_motion/model_best.pth.tar).

Demo

We provide a demo script that chains all the steps together, as a reference for the whole pipeline.

[To do]
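
Until the demo script lands, here is a minimal sketch that simply chains the evaluation steps documented below. It is not the official demo; DATASET_PATH and the output folders are placeholders to adjust to your setup.

set -e
DATA="DATASET_PATH/ModelsResources"  # placeholder: point at your extracted dataset
# Step 1: output shifted points and their attention.
for ARCH in jointnet_motion masknet_motion; do
  python -u training/train_rig.py --arch="$ARCH" -e \
    --resume="checkpoints/$ARCH/model_best.pth.tar" \
    --train_folder="$DATA/train/" --val_folder="$DATA/val/" --test_folder="$DATA/test/" \
    --output_folder="results/our_results"
done
# Step 2: extract joints (edit the folders at lines 49-55 of evaluate/eval_rigging.py first).
python -u evaluate/eval_rigging.py
# Step 3: form skeletons and rigs via pred_skel_func / pred_rig_func in evaluate/joint2rig.py.
# Step 4: animate from partial point cloud sequences (edit lines 279-282 first).
python -u evaluate/eval_tracking.py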

Evaluate on ModelsResources dataset:

  1. Output shifted points and their attention. Remember to change the dataset and the output folders in each command to your preference.
python -u training/train_rig.py --arch="jointnet_motion" -e --resume="checkpoints/jointnet_motion/model_best.pth.tar" --train_folder="DATASET_PATH/ModelsResources/train/" --val_folder="DATASET_PATH/ModelsResources/val/" --test_folder="DATASET_PATH/ModelsResources/test/" --output_folder="results/our_results"
python -u training/train_rig.py --arch="masknet_motion" -e --resume="checkpoints/masknet_motion/model_best.pth.tar" --train_folder="DATASET_PATH/ModelsResources/train/" --val_folder="DATASET_PATH/ModelsResources/val/" --test_folder="DATASET_PATH/ModelsResources/test/" --output_folder="results/our_results"
  2. Extract joints. Change the dataset and the result folders at lines 49-55 in evaluate/eval_rigging.py. We have set the optimal hyper-parameters by default.
python -u evaluate/eval_rigging.py
  3. Connect joints to form skeletons using pred_skel_func in evaluate/joint2rig.py. Then predict skinning weights to form rigs using pred_rig_func in the same file. Remember to change dataset_folder in each function; see the invocation sketch after this list.

  4. Animate characters according to partial point cloud sequences. Remember to set the dataset and the results folders at lines 279-282 in evaluate/eval_tracking.py.

python -u evaluate/eval_tracking.py
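
For step 3, a minimal invocation sketch: it assumes pred_skel_func and pred_rig_func can be called without arguments once dataset_folder has been edited inside each function, and that the repository root is on PYTHONPATH; check evaluate/joint2rig.py for the actual signatures.

# Hypothetical driver; the no-argument calls are an assumption, not confirmed by this README.
python -c "from evaluate.joint2rig import pred_skel_func, pred_rig_func; pred_skel_func(); pred_rig_func()"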

We provide our results here for reference.

Training

Run the following steps to train all the networks.

  1. Train a correspondence module with discrete frames on the ModelsResources dataset.
python -u training/train_corr_pose.py \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" 
--train_batch=8 --test_batch=8 \
--logdir="logs/corr_p_mr" \ 
--checkpoint="checkpoints/corr_p_mr" \ 
--num_workers=4 --lr=1e-3 \
--vis_branch_start_epoch=100 --schedule 200 \
--epochs=300 --dataset="modelsresource"
  2. (After 1) Train a deformation module with discrete frames on the ModelsResources dataset.
python -u training/train_deform_pose.py \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" \
--train_batch=6 --test_batch=6 \
--logdir="logs/deform_p_mr" \
--checkpoint="checkpoints/deform_p_mr" \
--init_extractor="checkpoints/corr_p_mr/model_best.pth.tar" \
--num_workers=4 --lr=1e-4 --epochs=150 --schedule 60 120 \
--dataset="modelsresource"
  3. (After 1 and 2) Train a joint prediction module. We provide the predicted deformation (folder "pred_flow") in our preprocessed dataset; if you use different data, you need to output the predicted deformation yourself:
python -u training/train_rig.py \
--arch="jointnet_motion" \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" \
--train_batch=4 --test_batch=4 \
--logdir="logs/jointnet_motion" \
--checkpoint="checkpoints/jointnet_motion" \
--lr=5e-4 --schedule 40 80 --epochs=120
  4. (After 1 and 2) Similar to 3, train an attention prediction module.
python -u training/train_rig.py \
--arch="masknet_motion" \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" \
--train_batch=4 --test_batch=4 \
--logdir="logs/masknet_motion" \
--checkpoint="checkpoints/masknet_motion" \
--lr=5e-4 --schedule 50 --epochs=100
  5. (After 1 and 2) Similar to 3 and 4, train a skinning prediction module.
python -u training/train_skin.py \
--arch="skinnet_motion" \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" \
--train_batch=4 --test_batch=4 \
--logdir="logs/skin_motion" \
--checkpoint="checkpoints/skin_motion" \
--loss_cont="infonce" \
--epochs=100
  6. To animate the rigged character based on a point cloud sequence, we train a correspondence module and a deformation module with sequential frames on the ModelsResources dataset. This can be achieved by simply adding "--sequential_frame":
python -u training/train_corr_pose.py \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" 
--train_batch=8 --test_batch=8 \
--logdir="logs/corr_p_mr_seq" \ 
--checkpoint="checkpoints/corr_p_mr_seq" \ 
--num_workers=4 --lr=1e-3 \
--vis_branch_start_epoch=100 --schedule 200 \
--epochs=300 --dataset="modelsresource" --sequential_frame
python -u training/train_deform_pose.py \
--train_folder="DATASET_PATH/ModelsResources/train/" \
--val_folder="DATASET_PATH/ModelsResources/val/" \
--test_folder="DATASET_PATH/ModelsResources/test/" \
--train_batch=6 --test_batch=6 \
--logdir="logs/deform_p_mr_seq" \
--checkpoint="checkpoints/deform_p_mr_seq" \
--init_extractor="checkpoints/corr_p_mr_seq/model_best.pth.tar" \
--num_workers=4 --lr=1e-4 --epochs=150 --schedule 60 120 \
--dataset="modelsresource" --sequential_frame
  7. To better generalize to real motion, we finetune the correspondence and deformation modules on the DeformingThings4D dataset. This can be achieved by setting "--dataset" to "deformingthings".
python -u training/train_corr_pose.py \
--train_folder="DATASET_PATH/DeformingThings4D/train/" \
--val_folder="DATASET_PATH/DeformingThings4D/val/" \
--test_folder="DATASET_PATH/DeformingThings4D/test/" \
--train_batch=8 --test_batch=8 \
--logdir="logs/corr_p_dt_seq" \
--checkpoint="checkpoints/corr_p_dt_seq" \
--resume="checkpoints/corr_p_mr_seq/model_best.pth.tar"
--num_workers=4 --lr=1e-3 \
--vis_branch_start_epoch=100 --schedule 200 \
--epochs=300 --dataset="deformingthings" --sequential_frame
python -u training/train_deform_pose.py \
--train_folder="DATASET_PATH/DeformingThings4D/train/" \
--val_folder="DATASET_PATH/DeformingThings4D/val/" \
--test_folder="DATASET_PATH/DeformingThings4D/test/" \
--train_batch=6 --test_batch=6 \
--logdir="logs/deform_p_dt_seq" \
--checkpoint="checkpoints/deform_p_dt_seq" \
--init_extractor="checkpoints/corr_p_dt_seq/model_best.pth.tar"
--num_workers=4 --lr=1e-4 --epochs=150 --schedule 60 120 \
--dataset="deformingthings" --sequential_frame
  8. When the shapes of the target mesh and the captured point cloud differ, we first deform the mesh to fit the point cloud. This is achieved with the same correspondence and deformation module architectures, trained on data with different shapes (train_deform/val_deform/test_deform):
python -u training/train_corr_shape.py \
--train_folder="DATASET_PATH/ModelsResources/train_deform/" \
--val_folder="DATASET_PATH/ModelsResources/val_deform/" \
--test_folder="DATASET_PATH/ModelsResources/test_deform/" 
--train_batch=8 --test_batch=8 \
--logdir="logs/corr_s_mr" \ 
--checkpoint="checkpoints/corr_s_mr" \ 
--num_workers=4 --lr=1e-3 \
--vis_branch_start_epoch=100 --schedule 200 \
--epochs=300 --dataset="modelsresource"
python -u training/train_deform_shape.py \
--train_folder="DATASET_PATH/ModelsResources/train_deform/" \
--val_folder="DATASET_PATH/ModelsResources/val_deform/" \
--test_folder="DATASET_PATH/ModelsResources/test_deform/" \
--train_batch=6 --test_batch=6 \
--logdir="logs/deform_s_mr" \
--checkpoint="checkpoints/deform_s_mr" \
--init_extractor="checkpoints/corr_s_mr/model_best.pth.tar" \
--num_workers=4 --lr=1e-4 --epochs=150 --schedule 60 120 \
--dataset="modelsresource"


MoRig's Issues

How to evaluate on DeformingThings4D

Can you share a demo script to evaluate on the DeformingThings4D dataset?
Or can you explain the changes that need to be made to the commands or files?
Also, could you share the filenames from the ModelsResources dataset shown in the main paper?

❓ How to use MoRig only for Rigging?

Hi Zhan-Xu and awesome MoRig Team,

Congratulations on this amazing work and very grateful that your team is sharing this with the world.
I am interested in using MoRig for its rigging feature. Is this possible? If yes, could you share how this can be done?

Cheers

Inconsistency in pred_vismask computed from Corrnet

Hi,
I am trying to reproduce the results for MoRig, but I am facing issues with deforming the mesh using the point cloud motion.
The deformation (specifically pred_vtx_traj) I get by running evaluate/eval_tracking.py is more or less static and does not correspond to the results in the our_results/tracking_loss/*.npz files.

So far I have figured out that the issue is due to the pred_vismask values computed by Corrnet. After running line 45 in models/deformnet.py, most of the values in pred_vismask are close to 0. Because of this, the number of vis_inds is approximately 20-50, but when using the pred_vismask present in our_results/tracking_loss/*.npz, the number is close to 900-1000.

Even on DeformingThings4D I am facing the same issue.
[Figure: astra_SambaDancing_1_front]
In the figure, the left, middle, and right panels show the input mesh, the target point cloud, and the target mesh configuration. The colors on the left mesh and the middle point cloud show the correspondence, while the right mesh shows the pred_vismask values rendered with the viridis color map (blue represents 0, green represents 1).

Is it valid to use MoRig on vertex caches?

Hi,

I was wondering whether it is valid to use MoRig with the point cloud being the vertex positions of a vertex-cache animation (blendshape-compatible), to re-rig an existing model for clothing/face bones and the like, e.g. for VRM spring bones.

IndexError: list index out of range

Thanks for the great project.

After following the README and building the environment, I ran it and got the following error.

(morig) yonelab@yonePC:~/workspace/MoRig$ python -u training/train_rig.py --arch="jointnet_motion" -e --resume ="checkpoints/jointnet_motion/model_best.pth.tar" --train_folder="home/yonelab/workspace/MoRig/ModelsResources/train/" --val_folder="home/yonelab/workspace/MoRig/ModelsResources/val/" --test_folder="home/yonelab/workspace/MoRig/ModelsResources/test/" --output_folder="results/our_results"
Namespace(aggr_method='attn', arch='jointnet_motion', checkpoint='checkpoints/test', epochs=120, evaluate=True, gamma=0.2, logdir='logs/test', lr=0.0005, motion_dim=32, num_keyframes=5, output_folder='results/our_results', resume='=checkpoints/jointnet_motion/model_best.pth.tar', schedule=[40, 80], start_epoch=0, test_batch=2, test_folder='home/yonelab/workspace/MoRig/ModelsResources/test/', train_batch=2, train_folder='home/yonelab/workspace/MoRig/ModelsResources/train/', val_folder='home/yonelab/workspace/MoRig/ModelsResources/val/', weight_decay=0.0001)
=> no checkpoint found at '=checkpoints/jointnet_motion/model_best.pth.tar'
    Total params: 7.88M
Processing...
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "training/train_rig.py", line 294, in <module>
    main(parser.parse_args())
  File "training/train_rig.py", line 103, in main
    train_loader = DataLoader(RigDataset(root=args.train_folder), batch_size=args.train_batch, shuffle=True, follow_batch=['joints'])
  File "./datasets/dataset_rig.py", line 13, in __init__
    super(RigDataset, self).__init__(root)
  File "/home/yonelab/anaconda3/envs/morig/lib/python3.7/site-packages/torch_geometric/data/in_memory_dataset.py", line 56, in __init__
    super().__init__(root, transform, pre_transform, pre_filter)
  File "/home/yonelab/anaconda3/envs/morig/lib/python3.7/site-packages/torch_geometric/data/dataset.py", line 87, in __init__
    self._process()
  File "/home/yonelab/anaconda3/envs/morig/lib/python3.7/site-packages/torch_geometric/data/dataset.py", line 170, in _process
    self.process()
  File "./datasets/dataset_rig.py", line 139, in process
    data, slices = self.collate(data_list)
  File "/home/yonelab/anaconda3/envs/morig/lib/python3.7/site-packages/torch_geometric/data/in_memory_dataset.py", line 112, in collate
    data_list[0].__class__,
IndexError: list index out of range

And here is my environment:
Ubuntu 20.04, Python 3.7.13

How can I deal with this?
Thank you in advance.
