xhuangcv / hdr-nerf
The official implementation of the CVPR 2022 paper: HDR-NeRF: High Dynamic Range Neural Radiance Fields
License: MIT License
I tried to run your code to train on my dataset. When the code reaches warm_crf, an error occurred: generator type has no operator "+".
You add the parameter sets with "+", as in model.exps_linears_r.parameters() + model.exps_linears_g.parameters(),
but the nn.Module.parameters() method returns an iterator of parameters, so "+" is not defined on it.
I guess you intended to append all parameters into a list and pass it to the optimizer.
So you probably should write it like:
params_to_train = [
    {'params': model.exps_linears_r.parameters()},
    {'params': model.exps_linears_g.parameters()},
    {'params': model.exps_linears_b.parameters()},
    {'params': model.r_l_linner.parameters()},
    {'params': model.g_l_linner.parameters()},
    {'params': model.b_l_linner.parameters()},
]
optimizer_crf = torch.optim.Adam(params_to_train, lr=5e-4)
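Alternatively, a minimal sketch using itertools.chain would also work (assuming the same model attributes as above):

import itertools
import torch

# Chain the per-channel parameter iterators into one iterable that
# the optimizer can consume; equivalent to concatenating lists.
params_to_train = itertools.chain(
    model.exps_linears_r.parameters(),
    model.exps_linears_g.parameters(),
    model.exps_linears_b.parameters(),
    model.r_l_linner.parameters(),
    model.g_l_linner.parameters(),
    model.b_l_linner.parameters(),
)
optimizer_crf = torch.optim.Adam(params_to_train, lr=5e-4)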
Please let me know if my understanding is wrong.
Hi @shsf0817,
Apologies, but please add tensorboardX to your requirements.txt file. For now it can be installed manually with:
pip install tensorboardX
Thanks
Hello author, thank you for your work and for open-sourcing the code; I have starred the repo.
While reading your paper, I ran into the following questions and hope you can answer them:
(i) First, the unit exposure loss. My understanding of this loss is that it adds a constraint at e Δt = 1, pinning the color there to the median pixel value. How do you guarantee that e Δt = 1 occurs? I am not sure whether my understanding is correct :)
(ii) Second, why not directly use an explicit tone-mapping function to convert the HDR image into an LDR image, for example Eq. (16) in the supplementary material with EV replaced by Δt? Wouldn't that mean fewer parameters, faster speed, better interpretability, and easier control? Also, regarding the formula for M(E), why do Eq. (14) and Eq. (15) use two different expressions? Is Eq. (14) only used for evaluation?
Hi, could you please explain the unit exposure loss in the paper?
In the paper, Eq. (8) says "C = g(ln(e) + ln(t))" and Eq. (12) says "Lu = l2(g(0) - C0)".
When ln(e) + ln(t) == 0, we have e * t == 1. If this is the case, should we take C0 to be the average pixel value when t == 1ms?
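For reference, here is how I read Eq. (12) as code, a minimal sketch (crf stands for the learned tone mapper g and is assumed to accept a scalar tensor; the anchor C0 = 0.5 is my own assumption and should be checked against the paper/code):

import torch

def unit_exposure_loss(crf, C0=0.5):
    # Lu = ||g(0) - C0||^2: anchor the CRF output at zero log
    # exposure (i.e. e * t == 1) to the fixed value C0.
    zero = torch.zeros(1)
    return ((crf(zero) - C0) ** 2).mean()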
Hello, I am interested in your related work, but I am facing this error when I run the code "python run_nerf.py --config configs/chair.txt".
Error:
Traceback (most recent call last):
  File "run_nerf.py", line 788, in <module>
    train()
  File "run_nerf.py", line 743, in train
    render_path(torch.Tensor(poses[i_test]).to(device), torch.Tensor(exps_source[i_test]).to(device), hwf, K, args.chunk, render_kwargs_test,
  File "run_nerf.py", line 205, in render_path
    imageio.imwrite(filename3, rgbs_h[-1])
  File "/home/z3/anaconda3/envs/HDR-NeRF/lib/python3.8/site-packages/imageio/core/functions.py", line 303, in imwrite
    writer = get_writer(uri, format, "i", **kwargs)
  File "/home/z3/anaconda3/envs/HDR-NeRF/lib/python3.8/site-packages/imageio/core/functions.py", line 226, in get_writer
    raise ValueError(
ValueError: Could not find a format to write the specified file in single-image mode
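My hedged guess, not confirmed against the repo: render_path is writing the HDR output rgbs_h here, so if filename3 ends in .hdr or .exr, imageio needs its FreeImage backend, which can be fetched once with:

import imageio

# One-time download of the FreeImage binaries imageio uses for
# .hdr/.exr I/O; without them, get_writer cannot find a format.
imageio.plugins.freeimage.download()

If filename3 has some other extension, checking what it actually is would be the first step.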
Which are the training views and which are the testing views for the real data?
The images are roughly in a 5x7 grid, but their order is not the usual one.
Hi @shsf0817,
Whenever I run this command:
python3 run_nerf.py --config configs/flower.txt
I get this output on the Ubuntu terminal:
Killed
Could you help?
Hi @shsf0817,
I am planning to use some of the images from your dataset in my thesis/publication; of course I will cite your paper and dataset. Do I need a license from you to use your dataset?
Hello @shsf0817,
This is amazing work. I am facing this error after running the command
python3 run_nerf.py --config configs/flower.txt
Error
Traceback (most recent call last):
File "/home/adwait/hdr-nerf/run_nerf.py", line 790, in <module>
train()
File "/home/adwait/hdr-nerf/run_nerf.py", line 458, in train
images, poses, bds, exps_source, render_poses, render_exps, i_test = load_real_llff_data(args.datadir, args.factor,
File "/home/adwait/hdr-nerf/load_real_llff.py", line 236, in load_real_llff_data
poses, bds, exp, imgs = _load_data(basedir, factor=factor) # factor=8 downsamples original imgs by 8x
File "/home/adwait/hdr-nerf/load_real_llff.py", line 61, in _load_data
poses_arr = np.load(os.path.join(basedir, 'poses_bounds_exps.npy'))
File "/home/adwait/.local/lib/python3.10/site-packages/numpy/lib/npyio.py", line 390, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/your_data_path/flower/poses_bounds_exps.npy'
Please could you help?
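(For anyone else hitting this: the traceback shows the config still points at the placeholder path /your_data_path. Assuming the config uses a datadir key, as args.datadir in the traceback suggests, editing configs/flower.txt along these lines should fix it:

datadir = /path/to/your/data/flower

where the path is whatever directory actually contains poses_bounds_exps.npy.)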
Hi @shsf0817,
Apologies for creating an issue again.
The two lines you can see below are deprecated:
from skimage.measure import compare_ssim
from skimage.measure import compare_psnr
The functions compare_ssim and compare_psnr have been removed from the skimage library.
These lines should be changed to:
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio
For more info, see this issue: scikit-image/scikit-image#3567
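A follow-up note: if you want to keep the rest of the code unchanged, you can alias the new functions to the old names (assuming the call sites use compare_ssim / compare_psnr as before):

# Alias the new skimage.metrics functions to the old names so that
# existing calls to compare_ssim / compare_psnr keep working.
from skimage.metrics import structural_similarity as compare_ssim
from skimage.metrics import peak_signal_noise_ratio as compare_psnr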
In load_real_llff.py, you use cp imgdir_orig imgdir (a plain copy) to produce input_images_4.
That does not seem right: it just copies the full-resolution images.
I think we should actually downscale the original images and then put them into the folder input_images_4, right? (See the sketch below.)
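A minimal sketch of what I mean (hypothetical helper; assumes a 4x factor and Pillow, and the repo's actual minify logic may differ):

import os
from PIL import Image

def minify_4x(imgdir_orig, imgdir):
    # Downscale every image by 4x into input_images_4 instead of
    # copying the full-resolution originals verbatim.
    os.makedirs(imgdir, exist_ok=True)
    for name in sorted(os.listdir(imgdir_orig)):
        img = Image.open(os.path.join(imgdir_orig, name))
        w, h = img.size
        img.resize((w // 4, h // 4), Image.LANCZOS).save(os.path.join(imgdir, name))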
Hi! It would help me greatly if you could tell me which OS this was run on. I ran it through WSL and the packages were not installed. However, using the Anaconda prompt on Windows installed all the packages, but the command for rendering a demo fails:
python3 run_nerf.py --config configs/demo.txt --render_only
Thank you
Hi,
I am attracted by HDR-NeRF's ability to output high-quality HDR images. I want to know whether HDR-NeRF works without samples from different poses.
Thanks!
Best Wishes!
I'm working on a similar NeRF reconstruction problem: each view contains 20 images at different exposures, and I randomly select 2 or 3 images for each view.
Here's the problem: NeRF gets trapped in local minima, overfitting in order to get a lower loss at each view. I wonder how you overcame this overfitting problem (the exposure ambiguity) at each view.
When I use COLMAP to process the synthetic scene chair, I find that the focal length is far from the one computed from camera_angle_x in transforms_train.json. I then checked the blender files for all 8 synthetic scenes: 5 scenes are correct, while 3 scenes (chair, diningroom, sponza) are wrong.
1. chair: the blender data is CAMERA_FOV = 33.8984° (0.5916387 rad), while the json data is camera_angle_x = 0.8575548529624939 rad.
2. diningroom: the blender data is CAMERA_FOV = 48.498° (0.8464498 rad), while the json data is camera_angle_x = 1.1884186267852783 rad.
3. sponza: the blender data is CAMERA_FOV = 22.8952° (0.39959663 rad), while the json data is camera_angle_x = 0.5897874236106873 rad.
When I use the blender data instead of the json data for these 3 scenes, I obtain focal lengths close to the COLMAP results.
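For reference, the focal length I compare against COLMAP comes from the standard NeRF-Blender convention; a minimal sketch (the image width W = 800 is an assumption about the render resolution):

import numpy as np

def focal_from_fov(camera_angle_x, W=800):
    # Focal length in pixels from the horizontal field of view:
    # focal = 0.5 * W / tan(0.5 * camera_angle_x)
    return 0.5 * W / np.tan(0.5 * camera_angle_x)

# e.g. for chair: json value vs. blender CAMERA_FOV (both in radians)
print(focal_from_fov(0.8575548529624939))  # from transforms_train.json
print(focal_from_fov(0.5916387))           # from the blender file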
Could you share the code to prepare a new dataset from JPEGs?