
wildlight's Introduction

WildLight: In-the-wild Inverse Rendering with a Flashlight (CVPR 2023)

Teaser

Dependencies

Conda is recommended for installing all dependencies:

conda env create -f environ.yaml
conda activate wildlight

Data convention

Input data is organized in a single folder. Images are saved as EXR/PNG files following the NeuS convention, or packed into a single npy file:

<case_name>
|-- cameras_sphere.npz    # camera & lighting parameters
|-- images.npy            # packed images (alternative to the image/ folder)
Or
|-- image
    |-- 000.exr        # target image for each view, in either exr or png format
    |-- 001.exr
    Or
    |-- 000.png
    |-- 001.png
    ...
|-- mask [optional]
    |-- 000.png        # target mask for each view, if available
    |-- 001.png
    ...

Camera and lighting parameters are stored in cameras_sphere.npz with the following key strings (a minimal writing example follows the list):

  • world_mat_x: $K_x[R_x|T_x]$ projection matrix from world coordinates to image coordinates, for view x
  • scale_mat_x: Sim(3) transformation matrix from object coordinates to world coordinates; we only recover shape & material inside a unit-sphere ROI in object coordinates. This matrix is usually static across all views.
  • light_energy_x: an RGB vector for flashlight intensity per view. With a fixed-power flashlight, set this to $(1,1,1)$ for images taken under the flashlight and $(0,0,0)$ for images without it.
  • max_intensity: [optional] a scalar giving the maximum pixel intensity (e.g. 255 for 8-bit images); defaults to inf
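
To make the convention concrete, here is a minimal sketch (not from the repo) of writing cameras_sphere.npz with these keys. K and [R|T] below are identity placeholders for real calibration data, and the 4x4 packing (with $K[R|T]$ in the top three rows) follows the NeuS convention:

import numpy as np

# Minimal sketch, not repo code: two views, one with the flashlight on.
n_views = 2
K = np.eye(3)                       # camera intrinsics (placeholder)
RT = np.eye(4)[:3]                  # [R|T] world-to-camera extrinsics, 3x4 (placeholder)

world_mat = np.eye(4)
world_mat[:3] = K @ RT              # K[R|T] projection, padded to 4x4
scale_mat = np.eye(4)               # object-to-world Sim(3), static across views

arrays = {}
for i in range(n_views):
    arrays[f"world_mat_{i}"] = world_mat
    arrays[f"scale_mat_{i}"] = scale_mat
    # flashlight on for view 0, off for view 1
    arrays[f"light_energy_{i}"] = np.ones(3) if i == 0 else np.zeros(3)
arrays["max_intensity"] = np.array(255.0)   # optional, for 8-bit images
np.savez("cameras_sphere.npz", **arrays)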

Config

Model and training parameters are written into config files under confs/*.conf. We provide three configurations for our datasets: confs/synthetic.conf and confs/synthetic_maskless.conf for our synthetic data, and confs/real.conf for real data.

Running

  1. Train. Run the following line to download and train on the synthetic legocar object dataset. We provide a total of 7 objects: bunny, armadillo, legocar, plant (synthetic, w/ ground truth) and bulldozer, cokecan and face (real scenes, images only).
    python exp_runner.py --case legocar --conf confs/synthetic.conf --mode train --download_dataset
    Intermediate results can be found under the exp/legocar/masked/ folder.
  2. Mesh and texture export.
    python exp_runner.py --case legocar --conf confs/synthetic.conf --mode validate_mesh --is_continue
    This will export a UV-unwrapped OBJ file along with PBR texture maps from the last checkpoint, under exp/legocar/masked/meshes/XXXXXXXX_export (this might take a few minutes).
  3. Validate novel view rendering. A dataset_val must be provided in the config.
    python exp_runner.py --case legocar --conf confs/synthetic.conf --mode validate_image --is_continue
    Results will be saved to exp/legocar/masked/novel_view/.

Results (rendered in Blender)

teaser.mp4

Acknowledgement

This repo is heavily built upon NeuS. We would like to thank the authors for open-sourcing their code. Special thanks go to @wei-mao-2019, a friend and fellow researcher who agreed to appear in our dataset.

BibTeX

@article{cheng2023wildlight,
  title={WildLight: In-the-wild Inverse Rendering with a Flashlight},
  author={Cheng, Ziang and Li, Junxuan and Li, Hongdong},
  journal={arXiv preprint arXiv:2303.14190},
  year={2023}
}


wildlight's Issues

Where can I download the dataset?

Hi there! Congrats on the great work!
I am trying to compare against your method in my experiments, but I could not find a link to download your synthetic dataset to reproduce the results.
Also, would it be possible to share your Blender files for generating the synthetic data, so that I can modify the rendering settings to match my setup?

Thanks in advance!

Questions about the implementation of Disney BRDF

Hi, thanks for your code! I have some questions about the implementation of Disney BRDF.

  1. About the diffuse term:

base_diffuse = (1 + (F_D90 - 1)*(1-hz**5))**2 / pi

From the SIGGRAPH course note (Sec. 5.3), I think it should be (1-hz)**5 for the co-located setup. Is there a typo in the code?

  2. About the geometry term:

G = 2 / ( torch.sqrt(1 + roughness_sq * (1/hz_sq - 1)) + 1)

I read through the SIGGRAPH course note but cannot figure out why the geometry term can be implemented like this. Could you explain the equation for the geometry term in more detail?
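
For reference, the quoted expression rearranges algebraically into the standard Smith G1 masking term for GGX, reading hz as $\cos\theta$ and roughness_sq as $\alpha^2$. A quick numerical check of that equivalence (my own sketch, not the authors' derivation):

import math

# Sketch, not repo code: verify that the quoted geometry term equals the
# Smith G1 masking term for GGX, 2*cos / (cos + sqrt(a2 + (1 - a2)*cos**2)),
# reading hz as cos(theta) and roughness_sq as alpha^2.
for hz in (0.1, 0.5, 0.9):
    for roughness_sq in (0.01, 0.09, 0.64):
        quoted = 2 / (math.sqrt(1 + roughness_sq * (1 / hz**2 - 1)) + 1)
        smith_g1 = 2 * hz / (hz + math.sqrt(roughness_sq + (1 - roughness_sq) * hz**2))
        assert math.isclose(quoted, smith_g1, rel_tol=1e-12)
print("quoted form matches Smith G1 for GGX")

If that reading is right, my remaining confusion is why a single G1 factor appears, rather than one per direction as in the full Smith term.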

Thanks!

`download_dataset()` fails

Thanks for the user-friendly setup for cloning, conda env installation, and running.
However, on the first run I got the following error:

$ python exp_runner.py --case legocar --conf confs/synthetic.conf --mode train --download_dataset
Hello Wooden
[connectionpool.py:1001 -            _new_conn() ] Starting new HTTPS connection (1): drive.google.com:443
[connectionpool.py:456 -        _make_request() ] https://drive.google.com:443 "GET /uc?id=1xJjPWSKT_CfTFVdvrxRXFgHSO_RZV7UN HTTP/1.1" 200 None
Access denied with the following error:

 	Cannot retrieve the public link of the file. You may need to change
	the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

	 https://drive.google.com/uc?id=1xJjPWSKT_CfTFVdvrxRXFgHSO_RZV7UN 

Traceback (most recent call last):
  File "exp_runner.py", line 677, in <module>
    runner = Runner(args.conf, args.mode, args.case, args.is_continue, args.download_dataset)
  File "exp_runner.py", line 58, in __init__
    self.download_dataset() # download dataset to default location
  File "exp_runner.py", line 162, in download_dataset
    gdown.extractall("datasets/legocar.zip", "datasets")
  File "/usr/wiss/haefner/miniconda3/envs/wildlight/lib/python3.8/site-packages/gdown/extractall.py", line 46, in extractall
    with opener(path, mode) as f:
  File "/usr/wiss/haefner/miniconda3/envs/wildlight/lib/python3.8/zipfile.py", line 1251, in __init__
    self.fp = io.open(file, filemode)
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/legocar.zip'

I had to download the dataset manually instead.
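
Something like the sketch below worked (the file id is copied from the log above; if Drive still denies programmatic access, download the file in a browser and place it at datasets/legocar.zip before extracting):

import gdown

# Workaround sketch, not repo code: fetch the archive manually, then unpack
# it where exp_runner.py expects it. The file id comes from the error log above.
url = "https://drive.google.com/uc?id=1xJjPWSKT_CfTFVdvrxRXFgHSO_RZV7UN"
gdown.download(url, "datasets/legocar.zip", quiet=False)
gdown.extractall("datasets/legocar.zip", "datasets")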

[Question] Is there any script to compute quantitative results under novel lighting, like Table 2 in the paper?

Hello, I would like to know whether there is a script to test the trained model under a given novel lighting and viewpoint.

As far as I know, this method does not itself relight the input object under novel lighting, so we need to export the mesh first, import the mesh and materials into a render engine like Blender, import the target lighting, and render the mesh.
Is there any script to help generate the results under novel lighting and compute the error, so that I can produce a table like Table 2 in the paper? (A minimal error-metric sketch follows at the end of this issue.)
[image: screenshot of Table 2 from the paper]

I would be very grateful if the authors could provide any help (e.g. their code to generate the numbers in Table 2). But it is also completely understandable if the authors think it is improper to release it, or have no such code.
Thanks very much!
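
For reference, once novel-lighting renders exist (e.g. out of Blender), a per-image PSNR takes only a few lines. A minimal sketch, where render_000.png and gt_000.png are hypothetical filenames and this is not the authors' evaluation code:

import numpy as np
import imageio.v2 as imageio

# Sketch only, not the authors' evaluation code. Filenames are hypothetical
# placeholders for a Blender render and its ground-truth counterpart.
def psnr(pred, gt, max_val=1.0):
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

pred = imageio.imread("render_000.png") / 255.0
gt = imageio.imread("gt_000.png") / 255.0
print(f"PSNR: {psnr(pred, gt):.2f} dB")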

How to save base_color/albedo images?

I have modified your validate_image, render and render_core functions to extract base_color in novel views. But the result looks strange; is there any guidance on how to save these results correctly?
500000_0_0
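
For reference, a generic sketch (not repo code) for dumping a float base-color buffer to an 8-bit PNG, assuming an HxWx3 array in [0, 1]. If the renderer outputs linear RGB, a gamma step before quantizing may also be needed, which could explain the odd look:

import numpy as np
import imageio.v2 as imageio

# Generic sketch, not repo code: base_color stands in for the HxWx3 float
# buffer extracted from the renderer, assumed to be in [0, 1].
base_color = np.random.rand(64, 64, 3)             # hypothetical placeholder
srgb = np.clip(base_color, 0.0, 1.0) ** (1 / 2.2)  # linear-to-gamma, if needed
imageio.imwrite("base_color_000.png", (srgb * 255).astype(np.uint8))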

face dataset result bad

I tried the face dataset, but the result is quite bad.

python exp_runner.py --case face --conf confs/real.conf --mode train --download_dataset
python exp_runner.py --case face --conf confs/real.conf --mode validate_mesh --is_continue
python exp_runner.py --case face --conf confs/real.conf --mode validate_image --is_continue

face

Could you give me some tips?

Bug Report: Exporting all gray PBR texture maps (including base_color)

Hello, I am trying this code on my generated synthetic dataset (you can download it here), with some input images shown here:

After training with the config synthetic_maskless.conf for 1,000,000 iterations using the command
python exp_runner.py --case synthetic_car --conf confs/synthetic_maskless.conf --mode train,
the output validation images in exp\car\nomask\validations_fine seem totally fine:

However, when exporting the model to a mesh using the command
python exp_runner.py --case synthetic_car --conf confs/synthetic_maskless.conf --mode validate_mesh --is_continue,
the exported UV texture maps appear broken: all of the PBR texture maps, including base_color, come out gray:

Base color is as follows. Only a little perturbation can be seen, and most pixel values are near 128.

And here is the checkpoint file after training for 1,000,000 iters.

It seems like a bug, but I could not find anything wrong when inspecting the code myself.
Could the developers check if there is something wrong? Thanks!
