neuralhaircut's Introduction

Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction

Paper | Project Page

This repository contains official inference code for Neural Haircut.

This code helps you create a strand-based hairstyle from multi-view images or a monocular video.

Getting started

Clone the repository and install requirements:

git clone https://github.sec.samsung.net/v-sklyarova/NeuralHaircut.git
cd NeuralHaircut
conda env create -n neuralhaircut -f neural_haircut.yaml
conda activate neuralhaircut

Initialize the k-diffusion, NeuS, MODNet, CDGNet, and npbgpp submodules, then download the pretrained weights for CDGNet and MODNet.

git submodule update --init --recursive
cd npbgpp && python setup.py build develop
cd ..

Download the pretrained NeuralHaircut models:

gdown --folder https://drive.google.com/drive/folders/1TCdJ0CKR3Q6LviovndOkJaKm8S1T9F_8

Running

Fitting the FLAME coarse geometry using multiview images

More details can be found in multiview_optimization.

Launching the first stage on the H3DS dataset or a custom monocular dataset:

python run_geometry_reconstruction.py --case CASE --conf ./configs/SCENE_TYPE/neural_strands.yaml --exp_name first_stage_SCENE_TYPE_CASE

where SCENE_TYPE = [h3ds|monocular].

  • If you want to add camera fitting:
python run_geometry_reconstruction.py --case CASE --conf ./configs/SCENE_TYPE/neural_strands_w_camera_fitting.yaml --exp_name first_stage_SCENE_TYPE_CASE --train_cameras

After the first stage finishes, you can do the following:

  • If you want to continue from checkpoint add flag --is_continue.

  • If you want to obtain mesh in higher resolution add flags --is_continue --mode validate_mesh.

Launching the second stage on the H3DS dataset or a custom monocular dataset:

python run_strands_optimization.py --case CASE --scene_type SCENE_TYPE --conf ./configs/SCENE_TYPE/neural_strands.yaml  --hair_conf ./configs/hair_strands_textured.yaml --exp_name second_stage_SCENE_TYPE_CASE
  • If during the first stage you also fitted the cameras, then use the following:
python run_strands_optimization.py --case CASE --scene_type SCENE_TYPE --conf ./configs/SCENE_TYPE/neural_strands_w_camera_fitted.yaml  --hair_conf ./configs/hair_strands_textured.yaml --exp_name second_stage_SCENE_TYPE_CASE

Train NeuralHaircut with your custom data

More information can be found in preprocess_custom_data.

For convenience, you can run the scripts on our monocular scene.

License

This code and model are available for scientific research purposes as defined in the LICENSE.txt file. By downloading and using the project you agree to the terms in the LICENSE.txt.

Links

This work is based on the great NeuS project. We also acknowledge additional projects that were essential and sped up development.

  • NeuS for geometry reconstruction;

  • npbgpp for rendering of soft rasterized features;

  • k-diffusion for diffusion network;

  • MODNet, CDGNet used to obtain silhouette and hair segmentations;

  • PIXIE used to obtain initialization for shape and pose parameters;

Citation

Please cite as below if you find this repository helpful to your project:

@inproceedings{sklyarova2023neural_haircut,
title = {Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction},
author = {Sklyarova, Vanessa and Chelishev, Jenya and Dogaru, Andreea and Medvedev, Igor and Lempitsky, Victor and Zakharov, Egor},
booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
year = {2023}
} 

neuralhaircut's People

Contributors

egorzakharov, vanessik


neuralhaircut's Issues

Error testing the example provided

Hello, I'm trying to test using the provided data, but it returns the following error:

index 66 is out of bounds for axis 0 with size 66

I'm trying to debug, but something is strange and I can't understand what it is.
Can someone help me? Thanks
image

failing to compute hair_mask with CDGnet

I am trying to preprocess custom data by following your instructions. For simplicity, I am using the monocular data you provide in /monocular/person_0_image. I get an error when executing the command:

python preprocess_custom_data/calc_masks.py --scene_path ./implicit-hair-data/data/SCENE_TYPE/CASE/ --MODNET_ckpt path_to_modnet --CDGNET_ckpt path_to_cdgnet

calc_masks.py", line 159, in main
for key, nkey in zip(state_dict_old.keys(), state_dict.keys()):
RuntimeError: OrderedDict mutated during iteration

I tried to find a workaround by rewriting lines 156 to 164 like this (following the discussion here: https://github.com/pytorch/pytorch/issues/40859):

current_model_dict = model.state_dict()
loaded_state_dict = torch.load(args.CDGNET_ckpt, map_location='cpu')
# Keep a loaded tensor only when its shape matches the model's parameter;
# otherwise fall back to the model's own (randomly initialized) weight.
new_state_dict = {k: v if v.size() == current_model_dict[k].size() else current_model_dict[k]
                  for k, v in zip(current_model_dict.keys(), loaded_state_dict.values())}
model.load_state_dict(new_state_dict, strict=False)

and then the images are generated, but they are all black. So what am I missing?
By contrast, the mask images via MODNet are generated successfully.
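A workaround that avoids the mutation-during-iteration error and also makes shape mismatches visible might look like the sketch below. This is not the repository's code: FakeTensor is a stand-in so the example runs without torch installed, and positional key matching is only assumed to be valid for this checkpoint. Note that whenever a shape mismatches, the fallback keeps the model's random initialization; if that happens for many layers, all-black masks would be the expected symptom.

```python
from collections import OrderedDict

# Stand-in for a torch tensor so the sketch runs without torch; in
# calc_masks.py the values would come from torch.load(args.CDGNET_ckpt).
class FakeTensor:
    def __init__(self, shape):
        self._shape = shape
    def size(self):
        return self._shape

def remap_state_dict(loaded_state_dict, model_state_dict):
    """Positionally remap checkpoint values onto the model's keys.

    Builds a fresh OrderedDict instead of mutating one while iterating
    (the cause of the original RuntimeError). A loaded tensor is kept
    only when its shape matches; otherwise the model's own weight is
    retained, leaving that layer randomly initialized.
    """
    new_state_dict = OrderedDict()
    for model_key, loaded_value in zip(model_state_dict.keys(),
                                       loaded_state_dict.values()):
        if loaded_value.size() == model_state_dict[model_key].size():
            new_state_dict[model_key] = loaded_value
        else:
            new_state_dict[model_key] = model_state_dict[model_key]
    return new_state_dict

# Hypothetical shapes for illustration: conv matches, fc does not.
model_sd = OrderedDict([("conv.weight", FakeTensor((3, 3))),
                        ("fc.weight", FakeTensor((10, 5)))])
loaded_sd = OrderedDict([("module.conv.weight", FakeTensor((3, 3))),
                         ("module.fc.weight", FakeTensor((7, 5)))])

remapped = remap_state_dict(loaded_sd, model_sd)
print(list(remapped.keys()))         # → ['conv.weight', 'fc.weight']
print(remapped["fc.weight"].size())  # → (10, 5): mismatched layer kept from model
```

Printing which keys fell back to the model's weights would quickly confirm whether the black masks come from skipped layers rather than from the input data.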

Example Result

Hi. Thanks for your great work.
Has someone run the example code (https://github.com/SamsungLabs/NeuralHaircut/blob/main/example) and met this problem?
I followed the steps to build my environment and used a V100 to run the example code. I ran 100,000 steps in stage one and then started stage two. However, one of the results in pred_hair_strands of stage two is as follows, and I think it's blurry:
pred_hair_strands_99500
Does anyone know how to solve this problem?

Thanks.

How does NeuralHaircut overcome the "noisy orientations map" problem in real-world data?

Hi, thanks for your work and the code, the results are very impressive!

I'm new in the realm of hair reconstruction and modeling. I noticed in the Related Work Section the paper says:

Non-uniform lighting and low effective resolution of the real-world data lead to orientation maps having excessive noise levels or lacking details.

How does NeuralHaircut solve this problem?

In my understanding, the neural hair growth field may alleviate this problem to some extent, as it aggregates information across multi-views; thus, the noise may be reduced. In addition, the use of the prior model would also alleviate this problem as it constrains the resulting hair model to be within a valid space.

What about your opinion? Thanks in advance.

Unable to create the environment on any of my devices

Hello,

My colleagues and I are having trouble creating the environment with conda; we all get the same error on different machines.

In my case, I am using Ubuntu 22.04 --also tried on Windows 10-- with the latest conda version and a 30-series NVIDIA GPU. In all of these cases I get the same error from the console: conda is unable to create the environment due to an extraordinary number of version errors, most of them related to PyTorch and CUDA.

I attached a txt containing the error trace from the console:
error.txt

As I said, I'm not the only one having this problem; my colleagues are experiencing just the same, with very similar --if not identical-- error traces, and some of them are using pip.

Right now I'm trying these steps, but I don't think they'll solve anything:

  1. Install PyTorch and PyTorch3D versions that match the dependency ones.
  2. Check all imports in the code files and remove the version numbers of those packages from the yaml.
  3. Remove prerequisites for anything that is not directly used in the code.

No luck for now.

Afro hair, anyone? xD

I saw the videos; impressive, but it's going to fail with afro and small, tightly curled hair, right? ^^
Can we see that, or do you not consider it at all?

second stage loss

Hi, can someone help me solve this error?
I try to train the second stage using the data you provide (because of memory limitations, I use 40 images), and the loss isn't decreasing.

image

image

PIXIE initialization

Thanks for your great work.
I am reproducing your work with the H3DS dataset. But I am facing one problem in multi-view optimization (https://github.com/SamsungLabs/NeuralHaircut/tree/main/src/multiview_optimization): it says that we need an initialization with PIXIE for the shape and pose parameters, saved as a dict in the initialization_pixie file.

When I go to the PIXIE repository, I am wondering whether a single image can be used for the initialization or whether all the multi-view images have to be used. Also, how to get the initialization_pixie file is unclear. Thank you!

Best,
Mike

Can I use my own dataset?

Hi,
I have a question: can I use my own single 2D image (front image)?

Thank you.

numpy.core._exceptions._ArrayMemoryError: Unable to allocate

Hi, can someone help me to solve this error?

I run this command
python run_geometry_reconstruction.py --case person_0 --conf ./configs/example_config/neural_strands-monocular.yaml --exp_name first_stage_person_0

Error

Hello Wooden
False
upload transform {'translation': array([0.37335169, 2.34675772, 2.03221262]), 'scale': 2.368383848250081} ./implicit-hair-data/data/monocular/person_0/scale.pickle
Number of views: 66
Traceback (most recent call last):
  File "C:\Users\ADMIN\Desktop\NeuralHaircut\run_geometry_reconstruction.py", line 842, in <module>
    runner = Runner(args.conf, args.mode, args.case, args.is_continue, checkpoint_name=args.checkpoint, exp_name=args.exp_name,  train_cameras=args.train_cameras)
  File "C:\Users\ADMIN\Desktop\NeuralHaircut\run_geometry_reconstruction.py", line 72, in __init__
    self.dataset = MonocularDataset(self.conf['dataset'])
  File "C:\Users\ADMIN\Desktop\NeuralHaircut\src\models\dataset.py", line 361, in __init__
    self.orientations_np = np.stack([cv.imread(im_name) for im_name in self.orientations_lis]) / float(self.num_bins) * math.pi
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.88 GiB for an array with shape (66, 2160, 2160, 3) and data type float64
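The float64 promotion comes from dividing the uint8 image stack by a Python float: 66 views at 2160x2160x3 in float64 is exactly the reported 6.88 GiB. Casting to float32 before the division halves that to about 3.44 GiB. The snippet below is a sketch of the idea, not the repository's dataset code; the bin count and tiny array shapes are placeholders for illustration.

```python
import math

import numpy as np

def orientation_stack_float32(images, num_bins=180):
    """Convert uint8 orientation maps to radians without float64 promotion.

    np.stack() of uint8 images divided by a Python float yields float64;
    casting to float32 first keeps the result in float32 and halves the
    allocation. num_bins=180 is an assumed value -- use the dataset's
    actual bin count from the config.
    """
    stack = np.stack(images).astype(np.float32)
    return stack / float(num_bins) * math.pi

# Tiny demo shapes; the real dataset would pass 66 images of 2160x2160x3.
imgs = [np.full((4, 4), 90, dtype=np.uint8) for _ in range(3)]
out = orientation_stack_float32(imgs)
print(out.dtype, out.shape, out.nbytes)  # → float32 (3, 4, 4) 192
```

If float32 is still too large, downscaling the orientation maps before stacking, or loading views lazily per batch instead of stacking them all up front, would reduce memory further.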

Add some about section to elaborate the aim of this site....

The "About" section in a website is a crucial component that serves multiple purposes:

  1. Introduction: It provides a clear understanding of the website's purpose and mission.

  2. Trust: It establishes credibility and builds trust with visitors.

  3. Team/Individual: Introduces the team or individual behind the website, creating a personal connection.

  4. Achievements: Showcases the website's accomplishments and milestones.

  5. Contact Information: Includes contact details for communication.

  6. Products/Services: Briefly explains the products or services offered.

  7. Website's History: Tells the story of the website's journey and development.

  8. Uniqueness: Highlights what makes the website stand out from competitors.

  9. SEO: Supports search engine optimization through relevant keywords.

  10. User Engagement: Encourages visitors to explore more and engage with the website.

In summary, the "About" section plays a vital role in presenting the website's identity, purpose, and achievements, fostering trust and connection with the audience while encouraging further engagement.

Need support section for further updates...

Adding a contact section to the footer page of your GitHub repository can be incredibly useful for several reasons:

  1. Easy Communication: It provides a straightforward way for users, contributors, or visitors to get in touch with you. Whether they have questions, feedback, bug reports, or collaboration opportunities, having a contact section simplifies the communication process.

  2. Feedback Collection: A contact section allows users to share their thoughts, suggestions, or concerns about your project. This feedback can be invaluable for improving your repository, fixing issues, or understanding what users need.

  3. Collaboration Opportunities: By providing contact information, you make it easier for potential collaborators or contributors to reach out and express their interest in collaborating with you on your project. This can lead to new ideas, enhancements, and contributions to your repository.

  4. Professionalism: Having a contact section adds a professional touch to your GitHub repository. It shows that you are open to communication and engagement with the community, which can positively impact your project's reputation.

  5. Community Building: Effective communication fosters a sense of community around your project. When users know that their input is valued and that they can easily reach you, they are more likely to engage with your repository and become active members of your community.

  6. Issue Reporting: Users can use the contact section to report any issues they encounter with your project. This helps you identify and address problems more efficiently.

  7. Project Support: If your project is being used by others, having a contact section allows them to seek support or assistance when needed. This can lead to a more positive user experience and increased adoption of your project.

  8. Networking: Through the contact section, you can connect with like-minded developers, potential collaborators, or even job opportunities. It opens up networking possibilities that might not have been available otherwise.

Overall, a contact section serves as a bridge between you and your project's community, promoting engagement, collaboration, and a positive user experience. It shows that you are committed to maintaining and improving your project, and it encourages a healthy, interactive environment around your GitHub repository.

Experiment folder cannot be created on Windows

The experiment folder cannot be created when launching
python run_geometry_reconstruction.py --case person_0 --conf ./configs/example_config/neural_strands-monocular.yaml --exp_name first_stage_person_0
due to colons in the file path.
Error:
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'exps_first_stage\\first_stage_person_0\\person_0\\neural_strands-monocular\\2023-07-25_20:08:00'

I got around that by adding this line in run_geometry_reconstruction.py:
time = time.replace(':', '_')
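An alternative is to generate the timestamp without colons in the first place, which is legal on both Windows and POSIX. The sketch below assumes the folder name is built from a strftime call; the actual variable names and formatting in run_geometry_reconstruction.py may differ.

```python
from datetime import datetime

def experiment_timestamp():
    """Return a folder-safe timestamp, e.g. '2023-07-25_20-08-00'.

    Using '-' instead of ':' in the time part avoids WinError 123,
    since colons are reserved characters in Windows file names.
    """
    return datetime.now().strftime('%Y-%m-%d_%H-%M-%S')

print(experiment_timestamp())
```

This also keeps experiment folders sortable by name, since the format is fixed-width and lexicographic order matches chronological order.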
