
cgtuebingen / nerd-neural-reflectance-decomposition

243 stars · 23 forks · 293 KB

NeRD: Neural Reflectance Decomposition from Image Collections - ICCV 2021

Home Page: https://markboss.me/publication/2021-nerd/

License: MIT License

Python 100.00%

nerd-neural-reflectance-decomposition's People

Contributors

martinarroyo, mazy1998, vork


nerd-neural-reflectance-decomposition's Issues

Thanks!

How do I run your program?
python train_nerd.py --datadir [DIR_TO_DATASET_FOLDER] --basedir [TRAIN_DIR] --expname [EXPERIMENT_NAME] --gpu [COMMA_SEPARATED_GPU_LIST] --config [CONFIG_FILE]

Can you be a little more specific?

Generating new SGs

Hey, thank you for the awesome work.

I want to test relighting with custom scenes. How can I convert an EXR environment map into SGs?
I've tried the fitting script from PhySG, but it looks like their SG format is not the same as yours.
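Not an official answer, but a minimal sketch of one common approach: fix the SG lobe directions (e.g. on a Fibonacci sphere) and a shared sharpness, then solve a linear least-squares problem for the per-lobe RGB amplitudes against the environment-map pixels. The (num_sgs, 7) layout [axis(3), sharpness(1), amplitude(3)] and the choice of 24 lobes are assumptions about the expected format, not something confirmed by the authors.

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform unit directions on the sphere (assumed lobe axes)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)
    theta = np.pi * (1.0 + 5.0**0.5) * i
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=-1)

def envmap_directions(h, w):
    """Per-pixel world directions for an equirectangular (lat-long) map."""
    v, u = np.meshgrid((np.arange(h) + 0.5) / h,
                       (np.arange(w) + 0.5) / w, indexing="ij")
    theta, phi = v * np.pi, u * 2.0 * np.pi
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1).reshape(-1, 3)

def fit_sg_amplitudes(envmap, num_sgs=24, sharpness=30.0):
    """Least-squares fit of RGB amplitudes for fixed SG lobes.

    Returns a (num_sgs, 7) array: [axis(3), sharpness(1), amplitude(3)].
    """
    h, w, _ = envmap.shape
    dirs = envmap_directions(h, w)                      # (N, 3)
    axes = fibonacci_sphere(num_sgs)                    # (K, 3)
    # SG basis: exp(lambda * (dot(d, axis) - 1)), evaluated per pixel.
    basis = np.exp(sharpness * (dirs @ axes.T - 1.0))   # (N, K)
    amps, *_ = np.linalg.lstsq(basis, envmap.reshape(-1, 3), rcond=None)
    return np.concatenate(
        [axes, np.full((num_sgs, 1), sharpness), amps], axis=1)
```

For a real EXR you would load the pixels with e.g. `imageio` and may want a non-negativity constraint on the amplitudes; the exact sharpness and lobe count should be matched to whatever `mean_sgs.npy` uses.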

Some questions about illumination

Hi, thanks for this wonderful work. I have some questions about running it on my own data. I only have a masked video, and I used the mean_sgs.npy you provided as the illumination. After training the network, I tried to render a bullet-time video, but when I change the view direction, the rendered images are wrong. Other outputs, such as the surface normals and BRDF, still seem right. Is this error caused by mean_sgs.npy? I'm not familiar with this; it would be a great help if you could assist me!

Code for mesh extraction

Hello
Thanks for your awesome work
I wonder if you have any plans to release the code for mesh extraction (the textured mesh, following the 4 steps in your supplementary material).

Do you have any plans to open your code?

Hi, thank you for your great work.
However, the paper states that the code is released, but I can't find it anywhere.
Do you have any plans to open-source it?

Best regards.

Code release

Very interesting method for decomposition. Any update on the code release?

Blank Results

I am trying to reproduce your work but ran into a problem. I followed the script in the README, but the RGB output is all white. This is my command:
python train_nerd.py --datadir /root/Data/synthetic/Chair/ --basedir ~/Experiments --expname 03_nerd_official --gpu 0,1,2,3 --config configs/nerd/blender.txt

I checked that the ground-truth images are read correctly. When debugging, I found that 'target' and 'target_mask' are both correct, but payload['rgb'] always outputs 1.0 after training. Any ideas? Thanks a lot!

OpenEXR images are broken in released datasets

Hello, thank you for your great work!

I executed download_datasets.py and downloaded the image datasets, but I found that the OpenEXR images in the synthetic directory are broken. Does anybody have the same problem?

tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.

When running the training script right after following the installation instructions this error occurs:

  File "train_nerd.py", line 9, in <module>
    import dataflow.nerd as data
  File "/tmp/NeRD-Neural-Reflectance-Decomposition/dataflow/nerd/__init__.py", line 4, in <module>
    from dataflow.nerd.dataflow import add_args, create_dataflow
  File "/tmp/NeRD-Neural-Reflectance-Decomposition/dataflow/nerd/dataflow.py", line 3, in <module>
    import tensorflow_addons as tfa
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow_addons/__init__.py", line 21, in <module>
    from tensorflow_addons import activations
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow_addons/activations/__init__.py", line 17, in <module>
    from tensorflow_addons.activations.gelu import gelu
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow_addons/activations/gelu.py", line 19, in <module>
    from tensorflow_addons.utils.types import TensorLike
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow_addons/utils/types.py", line 24, in <module>
    from keras.engine import keras_tensor
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/__init__.py", line 25, in <module>
    from keras import models
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/models.py", line 20, in <module>
    from keras import metrics as metrics_module
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/metrics.py", line 26, in <module>
    from keras import activations
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/activations.py", line 20, in <module>
    from keras.layers import advanced_activations
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/layers/__init__.py", line 23, in <module>
    from keras.engine.input_layer import Input
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/engine/input_layer.py", line 21, in <module>
    from keras.engine import base_layer
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/engine/base_layer.py", line 43, in <module>
    from keras.mixed_precision import loss_scale_optimizer
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 18, in <module>
    from keras import optimizers
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/optimizers.py", line 26, in <module>
    from keras.optimizer_v2 import adadelta as adadelta_v2
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/optimizer_v2/adadelta.py", line 22, in <module>
    from keras.optimizer_v2 import optimizer_v2
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py", line 36, in <module>
    keras_optimizers_gauge = tf.__internal__.monitoring.BoolGauge(
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow/python/eager/monitoring.py", line 360, in __init__
    super(BoolGauge, self).__init__('BoolGauge', _bool_gauge_methods,
  File "/usr/local/google/home/martinarroyo/miniconda3/envs/nerd-tmp/lib/python3.8/site-packages/tensorflow/python/eager/monitoring.py", line 135, in __init__
    self._metric = self._metric_methods[self._label_length].create(*args)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.

This occurs directly in the imports, and it seems to be fixed by upgrading the required TensorFlow version from 2.6.0 to 2.6.2. Since this only changes the patch version, it should not introduce any breaking changes.
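A possible workaround along the lines of the report above (the version numbers are the reporter's suggestion, not independently verified here):

```shell
# Suggested fix: bump TensorFlow from 2.6.0 to 2.6.2 in the environment.
# Patch release, so no breaking API changes are expected.
pip install --upgrade "tensorflow==2.6.2"
python -c "import tensorflow as tf; print(tf.__version__)"
```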

Running Error

Hi, it's quite a nice piece of work you have made. I tried to re-implement it based on the code and commands you published. However, when I run your code directly on the synthetic data, following your commands, the generated test images are all white. Here is my command:

nohup python -u train_nerd.py --datadir /home/ma-user/work/zhl/nerd_pro/nerd_data/synthetic/Car --basedir /home/ma-user/work/zhl/nerd_pro/nerd --expname car --gpu 0,1,2,3 --spherify --config configs/nerd/blender.txt > logs/car4.out 2>&1 &

Here, 'basedir' is the directory that contains 'train_nerd.py'. I tried both with and without the 'spherify' option, but the generated test images are all white in both cases. Could you give me some suggestions?

Relighted image generation

Congrats on such great work!

In Figure A6 as well as Figure 4 of the paper, a relit scene under an environment map and a point light is shown for specific objects. Could someone kindly explain how the relit images are generated? Do we need a separate rendering step, or can we get this result directly from the trained model?

SG VS SH

Hi, I'm very interested in your NeRD, but I'm confused about why you use spherical Gaussians (SGs) instead of spherical harmonics (SH). Are there any practical differences?
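Not the authors' answer, but some context on the trade-off: a single SG can represent a sharp, high-frequency lobe, and both the product of two SGs and the integral of an SG over the sphere have simple closed forms, which makes analytic shading with SG BRDFs tractable; low-order SH is band-limited and suited to smooth lighting only. A small sketch of those standard closed-form identities (not code from this repository):

```python
import numpy as np

def sg_product(axis1, lam1, mu1, axis2, lam2, mu2):
    """Product of two SGs mu*exp(lam*(dot(v, axis) - 1)) is another SG."""
    um = lam1 * axis1 + lam2 * axis2
    lam_m = np.linalg.norm(um)
    axis_m = um / lam_m
    mu_m = mu1 * mu2 * np.exp(lam_m - lam1 - lam2)
    return axis_m, lam_m, mu_m

def sg_integral(lam, mu):
    """Integral of an SG over the whole sphere: 2*pi*mu*(1 - e^(-2*lam))/lam."""
    return 2.0 * np.pi * mu * (1.0 - np.exp(-2.0 * lam)) / lam
```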

Question: what is the license for the datasets?

The license for this project is clearly stated as MIT, but I am not sure if this extends to the data sequences as well, since they are located in different directories. Could you clarify this? Thanks in advance!

Unrealistic training times (~5 days per scene)

This paper is simply brilliant, but the implementation appears to require at least 4-5 GPUs in parallel. On a single GPU (RTX 2080), about one set of coarse/fine samples plus illuminations is generated per hour. The args file shows fine_samples set to 128. I presume this means, at least on my hardware, that this model will take well over 5 days to train, versus ~30 minutes with the latest photogrammetry tools (with GPU-supported mesh lighting). Is there any way to make the results more accessible without spending $10k+ on hardware or cloud services? Even with, say, 5 Tesla V100s, the training time for a small scene would still be over 15 hours. The value-time tradeoff is uneven; surely there must be a way to close this gap.
