
hood's Introduction

HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics

Project | Paper

This is a repository with training and inference code for the paper "HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics" (CVPR2023).

Latest update: 30.09.2023, added a notebook and config for running inference with any mesh sequence or SMPL pose sequence, starting from a garment mesh in an arbitrary pose

Installation

Install conda environment

We provide a conda environment file hood.yml to install all the dependencies. You can create and activate the environment with the following commands:

conda env create -f hood.yml
conda activate hood

If you want to build the environment from scratch, here are the necessary commands:

Build environment from scratch
# Create and activate a new environment
conda create -n hood python=3.9 -y
conda activate hood

# install pytorch (see https://pytorch.org/)
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia -y

# install pytorch_geometric (see https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html)
conda install pyg -c pyg -y

# install pytorch3d (see https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md)
conda install -c fvcore -c iopath -c conda-forge fvcore iopath -y
conda install -c bottler nvidiacub -y
conda install pytorch3d -c pytorch3d -y


# install auxiliary packages with conda
conda install -c conda-forge munch pandas tqdm omegaconf matplotlib einops ffmpeg -y

# install more auxiliary packages with pip
pip install smplx aitviewer chumpy huepy

# create a new kernel for jupyter notebook
conda install ipykernel -y; python -m ipykernel install --user --name hood --display-name "hood"
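
Whichever installation route you take, it is worth verifying the environment before downloading the data. Below is a minimal sanity check (a sketch only; it assumes a CUDA-capable GPU, and the printed versions will depend on your install):

# quick check that the core dependencies import and CUDA is visible
import torch
import torch_geometric
import pytorch3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch_geometric:", torch_geometric.__version__)
print("pytorch3d:", pytorch3d.__version__)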

Download data

HOOD data

Download the auxiliary data for HOOD using this link. Unpack it anywhere you want and set the HOOD_DATA environment variable to the path of the unpacked folder. Also, set the HOOD_PROJECT environment variable to the path you cloned this repository to:

export HOOD_DATA=/path/to/hood_data
export HOOD_PROJECT=/path/to/this/repository
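
The notebooks and scripts rely on these two variables, so it helps to confirm they are visible to Python before going further (a minimal sketch; it only assumes the exports above were done in the shell that launches Jupyter):

import os

# fail early if the variables are missing or point to non-existent directories
for var in ("HOOD_DATA", "HOOD_PROJECT"):
    path = os.environ.get(var)
    assert path is not None and os.path.isdir(path), f"{var} is not set or is not a directory"
    print(var, "=", path)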

SMPL models

Download the SMPL models using this link. Unpack them into the $HOOD_DATA/aux_data/smpl folder.

In the end your $HOOD_DATA folder should look like this:

$HOOD_DATA
    |-- aux_data
        |-- datasplits // directory with csv data splits used for training the model
        |-- smpl // directory with smpl models
            |-- SMPL_NEUTRAL.pkl
            |-- SMPL_FEMALE.pkl
            |-- SMPL_MALE.pkl
        |-- garment_meshes // folder with .obj meshes for garments used in HOOD
        |-- garments_dict.pkl // dictionary with garment meshes and their auxiliary data used for training and inference
        |-- smpl_aux.pkl // dictionary with indices of SMPL vertices that correspond to hands, used to disable hands during inference to avoid body self-intersections
    |-- trained_models // directory with trained HOOD models
        |-- cvpr_submission.pth // model used in the CVPR paper
        |-- postcvpr.pth // model trained with refactored code with several bug fixes after the CVPR submission
        |-- fine15.pth // baseline model denoted as "Fine15" in the paper (15 message-passing steps, no long-range edges)
        |-- fine48.pth // baseline model denoted as "Fine48" in the paper (48 message-passing steps, no long-range edges)
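
To sanity-check the downloaded auxiliary data, the .pkl files can be inspected from Python. This is only a sketch: it assumes the files are plain pickle dictionaries (the exact keys inside them are not documented here) and that HOOD_DATA is already set.

import os
import pickle

aux_dir = os.path.join(os.environ["HOOD_DATA"], "aux_data")

# garments_dict.pkl: garment meshes and their auxiliary data
with open(os.path.join(aux_dir, "garments_dict.pkl"), "rb") as f:
    garments_dict = pickle.load(f)
print("garments:", list(garments_dict.keys()))

# smpl_aux.pkl: indices of SMPL hand vertices used to disable hands during inference
with open(os.path.join(aux_dir, "smpl_aux.pkl"), "rb") as f:
    smpl_aux = pickle.load(f)
print("smpl_aux keys:", list(smpl_aux.keys()))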

Inference

The jupyter notebook Inference.ipynb contains an example of how to run inference with a trained HOOD model given a garment and a pose sequence.

It also has examples of use-cases such as adding a new garment from an .obj file and converting sequences from the AMASS and VTO datasets to the format used in HOOD.

To run inference starting from an arbitrary garment pose and an arbitrary mesh sequence, refer to the InferenceFromMeshSequence.ipynb notebook.
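
For orientation, the core of a rollout in the notebook comes down to a few calls (a rough sketch assembled from snippets that appear in Inference.ipynb and in the issues below; it assumes the runner and dataloader objects have already been built as in the notebook and that a CUDA device is available):

sequence = next(iter(dataloader))                   # one garment + pose-sequence sample
sequence = move2device(sequence, 'cuda:0')          # move all tensors to the GPU
trajectories_dict = runner.valid_rollout(sequence)  # simulate the whole sequence

Once a rollout is saved to disk, it can be rendered, for example with python utils/show.py rollout_path=PATH_TO_SEQUENCE (see the notebook for details).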

Training

To train a new HOOD model from scratch, you need to first download the VTO dataset and convert it to our format.

You can find the instructions on how to do that and the commands used to start the training in the Training.ipynb notebook.

Validation Sequences

You can download the sequences used for validation (Table 1 in the main paper and Tables 1 and 2 in the Supplementary) using this link.

You can find instructions on how to generate validation sequences and compute metrics over them in the ValidationSequences.ipynb notebook.

Repository structure

See the RepoIntro.md for more details on the repository structure.

Citation

If you use this repository in your paper, please cite:

@inproceedings{grigorev2022hood,
  author = {Grigorev, Artur and Thomaszewski, Bernhard and Black, Michael J. and Hilliges, Otmar},
  title = {{HOOD}: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year = {2023},
}

hood's People

Contributors

dolorousrtur


hood's Issues

Shoulders position

Hello!
Thank you for this awesome work on the project

However, I have a small question, if you can help me out.
I'm running HOOD on a specific sequence of body poses, and the body is in T-pose (this is what the .obj file looks like after extracting it from the .pkl input file used):
image
When I run HOOD on this avatar, with the t-shirt garment for example, the shoulders change, as can be seen in the image below:

image (2)

Why does this happen and how can I fix it? Thank you!

Which SMPL models to download?

Hi,
The README file links to the SMPLX download page, although the code (and paper) reference the SMPL model.
Either way, each project has many models, so could you be so kind as to provide a direct download link or more detailed instructions?

thank you :)

Can inference be done without SMPL?

Thanks so much for making this repo available, the HOOD paper was very impressive and I was interested in giving it a try. Looking into the inference example, it looks like the input sequence requires body poses as joint angles plus SMPL parameters. I was hoping to try the pre-trained models on characters and garments that are not SMPL based, is that possible? From what I recall in the paper, the model is mostly focused on the garment, and the body is only relevant to the extent that the "body edges" are included, which connect points of potential contact. I understand why SMPL is necessary in training, but for inference, is it required once the model is trained? Would it be possible to have an inference front-end that takes sequences of body mesh points and a garment mesh where only the body edges and pinned vertices are specified instead of joint angles and SMPL params? I realize that extrapolation to body shapes outside the training set could be problematic, but it would be interesting to see how far beyond it can be pushed.

timing of training the model

Hi!
Thanks for your excellent project!
When I was training the cvpr model using the provided samples, the training speed seemed very slow (the progress bar copied from the console: cvpr: 1%|▊ | 5600/732445 [26:07<105:06:36, 1.92it/s], about 3400 minutes per epoch on an RTX A6000 GPU... for postcvpr, the speed is even slower)

How could I achieve the experiment details you mentioned (We trained our final model for 150,000 training iterations, which took around 26 hours on an NVIDIA Quadro RTX 6000 GPU)? Do you have any idea how to improve the speed?

Any help would be appreciated. Many thx!!!

Obstacle positions used in the friction loss

Hi,
Thanks for your amazing work. I'm confused about why, in the friction loss, you do knn between:

  1. Line 60 example['obstacle'].prev_pos and example['cloth'].pos
  2. Line 61 example['obstacle'].pos and example['cloth'].pred_pos

As far as I understand, example['cloth'].pos corresponds to example['obstacle'].pos and example['cloth'].pred_pos corresponds to example['obstacle'].target_pos. For example, in the collision loss, you do knn between example['cloth'].pos and example['obstacle'].pos.

Is there anything I misunderstood? Thank you very much.

Garment Slides/Stretches Towards LHS of Character

I have been testing out the PostCVPR model on some of the supplied garments with alternative body meshes (I am unable to use SMPL) just in static T-poses and noticed some slightly strange behaviour: the garment seems to have a tendency to move towards the left-hand-side (rightwards when viewed from the front) of the character.
See garment at frame 0:
front-frame-0
back-frame-0
...versus at frame 97:
front-frame-97
back-frame-97
I have tested a couple of body meshes (both the "X-Bot" and "Mannequin" from https://www.mixamo.com/#/?page=1&type=Character) and a couple of garments (the provided T-shirt and dress) and observed similar behaviour.

Is this expected behaviour/have you noticed this happening with SMPL as well? Should I try pinning vertices to avoid this? The changes I made to the code to get non-SMPL meshes working were admittedly pretty hacky so I guess I might have accidentally introduced a bug with those, though I'm not sure what I could have done to make the garment drift like this.

Errors while rendering a video with inference.ipynb

Hi,
Thanks for sharing this amazing work! I followed all the instructions and tried to reproduce the example in Inference.ipynb. However, I got some error while writing the result's video with both rendering options. I am wondering if anyone knows how to resolve the issues.

  1. Using aitviewer:
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[3], line 5
      2 from aitviewer.headless import HeadlessRenderer
      4 # Careful!: creating more that one renderer in a single session causes an error
----> 5 renderer = HeadlessRenderer()

File [~/.conda/envs/hood/lib/python3.10/site-packages/aitviewer/headless.py:36](https://vscode-remote+ssh-002dremote-002bpsddw-002dml-002dlinux01-002eusrd-002escea-002ecom.vscode-resource.vscode-cdn.net/home/XXX/HOOD/~/.conda/envs/hood/lib/python3.10/site-packages/aitviewer/headless.py:36), in HeadlessRenderer.__init__(self, **kwargs)
     30 def __init__(self, **kwargs):
     31     """
     32     Initializer.
     33     :param frame_dir: Where to save the frames to.
     34     :param kwargs: kwargs.
     35     """
---> 36     super().__init__(**kwargs)

File [~/.conda/envs/hood/lib/python3.10/site-packages/aitviewer/viewer.py:139](https://vscode-remote+ssh-002dremote-002bpsddw-002dml-002dlinux01-002eusrd-002escea-002ecom.vscode-resource.vscode-cdn.net/home/hlu/HOOD/~/.conda/envs/hood/lib/python3.10/site-packages/aitviewer/viewer.py:139), in Viewer.__init__(self, title, size, samples, **kwargs)
    136 # Calculate window size
    137 size = int(size[0] * self.size_mult), int(size[1] * self.size_mult)
--> 139 self.window = base_window_cls(
    140     title=title,
    141     size=size,
    142     fullscreen=C.fullscreen,
    143     resizable=C.resizable,
    144     gl_version=self.gl_version,
...
     88 _apply_env_var(kwargs, 'libx11', 'GLCONTEXT_LINUX_LIBX11')
     89 kwargs = _strip_kwargs(kwargs, ['glversion', 'mode', 'libgl', 'libx11'])
---> 90 return x11.create_context(**kwargs)

Exception: (standalone) XOpenDisplay: cannot open display
  2. Using python utils/show.py rollout_path=PATH_TO_SEQUENCE:
QObject::moveToThread: Current thread (0x81587d0) is not the object's thread (0x86795b0).
Cannot move to target thread (0x81587d0)

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/XXX/.conda/envs/hood/lib/python3.10/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.

How to add a custom cloth for testing?

Hi,
the demo video is amazing!
Now I want to test some custom cloth that is not in garment_meshes.
Could you provide some tips on how to build a new garment mesh like the ones in garments_dict.pkl?
Thanks!

Adding multiple pieces of clothes

Hi,
first of all, thank you for the great work.

I want to ask if there is any way to add multiple items of clothing, like pants and a t-shirt, to one avatar.
My idea was to combine the t-shirt with the pants to make them one object. Do you think there is a better way, or do you have any useful advice on how to do this?

Thank you in advance for your answer.

AttributeError: 'Inspector' object has no attribute 'inspect'

Hello all

While trying to run the model using the provided notebook, I faced this error in the "create validation config and create Runner object" cell:

Screenshot from 2024-03-09 12-21-11

And it also gives me the same AttributeError related to the keys attribute in addition to the inspect attribute used here:

Screenshot from 2024-03-09 12-22-52

Has anyone faced this issue and was able to fix it? I'd really appreciate your help.
Thank you.

error

When I convert a VTO sequence from Inference.ipynb to the HOOD .pkl format with the code convert_vto_to_pkl(vto_sequence_path, target_pkl_path, n_zeropose_interpolation_steps=30), it reports the following error:

KeyError                                  Traceback (most recent call last)
Cell In[19], line 1
----> 1 convert_vto_to_pkl(vto_sequence_path, target_pkl_path, n_zeropose_interpolation_steps=30)
      2 print(f'Pose sequence saved into {target_pkl_path}')

File ~/projects/mjz/HOOD/utils/data_making.py:207, in convert_vto_to_pkl(vto_seq_path, out_path, start, n_frames, n_inter_steps, n_zeropose_interpolation_steps)
    204 vto_dict = pickle_load(vto_seq_path)
    206 out_dict = dict()
--> 207 out_dict['translation'] = vto_dict['translation'][start:]
    208 out_dict['body_pose'] = vto_dict['pose'][start:, 3:72]
    209 out_dict['global_orient'] = vto_dict['pose'][start:, :3]

KeyError: 'translation'

VTO dataset doesn't have the samples in train.csv

image
You can see in the picture that the VTO dataset only has shape numbers up to "07",
but the ids in train.csv index up to shape14, maybe shape15.
Did I miss something?
Maybe I am missing part of the dataset?
Please, I need help.

Cloth position precomputed by LBS

Hi! Your work is super great!

When I read the code, I noticed that the position of the cloth is precomputed for inference by LBS. If I don't set pinned vertices, is it possible not to use the precomputed position? I am trying to change the obstacle during one motion sequence, so the shape of the obstacle might change, and so would the blend weights. Precomputing the position might be challenging.

Looking forward to your help! Thank you!

A-pose garments?

Hi, thanks for the work! I am wondering if your work can be used for other datasets, e.g. BEDLAM, where the rest poses of all clothes are in A-pose. I tried to modify smplx_v_rest_pose when adding garments into the dict, but without luck (it seems that the upper part looks alright). Is there anything else I need to change? Thanks a lot!

Screenshot 2023-08-25 at 03 46 29

Clothing and model first frame mismatch

When I ran it from Inference_from_any_pose.ipynb, I got this:
1723124613326
See from_any_pose.yaml:
dataloader:
  num_workers: 0
  batch_size: 1
  pyg_data: True
dataset:
  from_any_pose:
    smpl_model: 'smpl/SMPL_FEMALE.pkl'
    pose_sequence_type: "smpl"
    pose_sequence_path: 'fromanypose/pose_sequence.pkl'
    # alternative: pose_sequence_type: "mesh"
    # alternative: pose_sequence_path: 'fromanypose/mesh_sequence.pkl'
    garment_template_path: 'fromanypose/tshirt.pkl'
    n_coarse_levels: 3

What do I do to make the garment match the model?

How to obtain a 3D dynamic model

Hi, this is a great paper. I would like to know how to obtain a 3D dynamic sequence of the results, such as a file in FBX format, using your work.

Help! ValueError: too many values to unpack (expected 3)

Hi, I got this error when I run my custom cloth with this command:

sequence = next(iter(dataloader))
sequence = move2device(sequence, 'cuda:0')
trajectories_dict = runner.valid_rollout(sequence,  bare=True)

Error

File ~/HOOD/utils/cloth_and_material.py:281, in VertexNormalsPYG.forward(self, pyg_data, node_key, pos_key)
    279 f = pyg_data[node_key].faces_batch.T  # F x 3
    280 triangles = gather(v, f, 0, 1, 1)  # F x 3 x 3
--> 281 v0, v1, v2 = torch.unbind(triangles, dim=-2)  # F x 3
    282 e0 = v1 - v0  # F x 3
    283 e1 = v2 - v1

ValueError: too many values to unpack (expected 3)

Texture of garments

First, thank you for providing an amazing garment model with remarkable performance.

In your paper, I saw that HOOD can reflect real-world clothes using 3D scan data (Figure 9)

Can I know how to apply the texture to the garment mesh?

Thank you!

git clone fails on Windows

Hello! Thank you for sharing your research work!

I tried to clone this repository but I got errors due to the directory name 'aux', so I can't fully clone the repository.
It seems that the Windows platform reserves the name 'aux' and does not allow users to use it.

I will try another temporary way to clone it, like using WSL to clone and then change the folder name.

I just wanted to let you know about this problem on Windows.

Thanks.

PS: after cloning with WSL, I found that the libraries this repository uses can currently be built only on the Linux platform. So, people who want to build this repository should start with Linux.

torch_collisions

In the dev branch:
ModuleNotFoundError: No module named 'torch_collisions'
Could you provide the code?

How to reproduce Fig.10(B)?

@Dolorousrtur
Thank you for sharing this great work with us!

My question is: in order to reproduce Fig. 10(B), i.e. different materials for different parts of a garment, do we need to retrain the model?
If we can reproduce Fig. 10(B) with the pretrained cvpr model, what exactly is the required setup?
