postech-cvlab / scnerf
[ICCV21] Self-Calibrating Neural Radiance Fields
License: MIT License
Hello author,
Thank you for sharing your code. I want to recover the camera pose corresponding to each image, captured while moving around a circle; for example, I take a picture every 10 degrees. After training the network, I find that the results in logs are some images and some .tar files. Can I get the camera pose from these, e.g. as a 4x4 matrix?
Looking forward to your reply.
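(For anyone with the same question: NeRF-style repos typically save those `.tar` files via `torch.save`, so they can be opened with `torch.load` and inspected; the exact key names for the learned extrinsics depend on the repo version, so check the checkpoint's keys yourself. Once you have a 3x3 rotation and a translation, assembling the 4x4 pose is straightforward. A minimal numpy sketch, with the `[R | t]` layout as my assumption of what is wanted:)

```python
import numpy as np

def to_homogeneous_pose(R, t):
    """Stack a 3x3 rotation and a 3-vector translation into a 4x4
    pose matrix of the form [R | t; 0 0 0 1]."""
    pose = np.eye(4)
    pose[:3, :3] = R
    pose[:3, 3] = t
    return pose

# Example with an identity rotation and a unit translation along z.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
pose = to_homogeneous_pose(R, t)
print(pose.shape)  # (4, 4)
```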
Hi,
thanks for your great work! I have some questions about applying this work to large-scale datasets.
1. In your paper you did an ablation study on the LLFF dataset for IE, OD, and PRD; did you run the same ablation on the Tanks and Temples dataset?
2. Is it feasible to apply this work to large-scale datasets without initial poses from COLMAP?
3. The BARF: Bundle-Adjusting Neural Radiance Fields paper mentions that it is hard to optimize NeRF and poses jointly because of the positional encoding. In your experiments, did you find it necessary to change the positional encoding function as BARF does?
@chrischoy @minsucho @soskek @joonahn
Looking forward to your reply!
Hi,
Thanks for the repo. I was trying to run SCNeRF with only images, but after looking at the code and the related issues, it seems like I need to run the colmap_utils script nonetheless. However, there are several errors when trying to run the script:
File "/home/SCNeRF/colmap_utils/read_sparse_model.py", line 378, in main
depth_ext = os.listdir(os.path.join(args.working_dir, "depth"))[0][-4:]
and
File "/home/SCNeRF/colmap_utils/post_colmap.py", line 33, in load_colmap_data
with open(os.path.join(realdir, "train.txt"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/data/TUM_desk_rgb/train.txt'
I checked the code; I think the error happens because COLMAP does not directly output a depth directory, and I have no idea what train.txt is supposed to be. Could you double-check that the provided script works for a pure RGB dataset all the way through?
Also, if possible, it would be very helpful if you could provide a more detailed guide on how to run with only image inputs.
Thanks in advance!
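(A note for others who hit the train.txt error: the file appears to be a train-split listing read by post_colmap.py. Assuming it simply lists one image filename per line — an unverified guess, so check how the script parses it before relying on this — a split file could be generated like so:)

```python
import os
import tempfile

def write_train_split(image_dir, out_path, hold_every=8):
    """Write a train.txt listing image filenames, holding out every
    hold_every-th frame for testing. The one-name-per-line format is
    an assumption about what post_colmap.py expects, not verified."""
    names = sorted(f for f in os.listdir(image_dir)
                   if f.lower().endswith((".png", ".jpg", ".jpeg")))
    train = [n for i, n in enumerate(names) if i % hold_every != 0]
    with open(out_path, "w") as f:
        f.write("\n".join(train))
    return train

# Demo on a throwaway directory with 16 dummy frames.
tmp = tempfile.mkdtemp()
for i in range(16):
    open(os.path.join(tmp, f"frame_{i:03d}.png"), "w").close()
train = write_train_split(tmp, os.path.join(tmp, "train.txt"))
print(len(train))  # 14: frames 0 and 8 are held out
```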
Two questions, if you don't mind:
Loaded SuperPoint model
Loaded SuperGlue model ("outdoor" weights)
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter:
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
wandb: ERROR Error while calling W&B API: project not found (<Response [404]>)
Thread SenderThread:
Traceback (most recent call last):
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/lib/retry.py", line 102, in call
result = self._call_fn(*args, **kwargs)
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/internal/internal_api.py", line 138, in execute
six.reraise(*sys.exc_info())
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/six.py", line 719, in reraise
raise value
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/internal/internal_api.py", line 132, in execute
return self.client.execute(*args, **kwargs)
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/client.py", line 52, in execute
result = self._get_result(document, *args, **kwargs)
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/client.py", line 60, in _get_result
return self.transport.execute(document, *args, **kwargs)
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/transport/requests.py", line 39, in execute
request.raise_for_status()
File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api.wandb.ai/graphql
Hi
First of all thank you very much for uploading your code
I think you have a typo in the requirement.txt file; shouldn't it be named "requirements.txt"?
Also, for running demo.sh at least, the symlink should be created to ./data/ and not to data/nerf_llff_data, otherwise an error is raised.
Thanks,
Hello. Thank you for great paper and code.
I have one small question.
The threshold value of the projected ray distance loss is set to 5; is there a reason why you chose this value?
Also, is this threshold used even when the camera parameters are initialized with an identity matrix and a zero vector (the self-calibration experiments)?
When the camera parameters are coarse or far from correct, I would expect the proj_ray_dist loss to be much larger than 5; wasn't it?
I wonder whether this threshold works in that case.
Thank you!
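(My reading of the question, for context: a threshold like this usually acts as an outlier filter, averaging the projected ray distance only over correspondences below the cutoff. The masking behaviour below is that reading, not a verified transcription of the repository's loss code:)

```python
import numpy as np

def masked_prd_loss(ray_dists, threshold=5.0):
    """Average projected ray distance over correspondences, ignoring
    those at or above the threshold (treated as outliers). This is an
    illustrative sketch of a thresholded loss, not SCNeRF's code."""
    d = np.asarray(ray_dists, dtype=float)
    valid = d < threshold
    if not valid.any():
        return 0.0  # nothing survives the filter
    return float(d[valid].mean())

print(masked_prd_loss([0.5, 1.0, 12.0]))  # 0.75: the 12.0 outlier is ignored
```

With such masking, a badly initialized camera would contribute few (or no) valid pairs early on, which would be one answer to why training does not blow up even when most distances exceed 5.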
Thanks for your great work! I want to know how "multiplicative_noise" influences the results. Will results be better if I use it during training? I noticed that you set it to True in all experiments. Thanks in advance!
Reference codes are here: https://github.com/POSTECH-CVLab/SCNeRF/blob/master/model/camera_model.py#L166
Q1
Is it true that each element of n is divided by c, not f?
Also, what is the meaning of the p' value? Is it the undistorted pixel?
Q2
In this equation, I'm not sure why
Q3
In this equation, why divide it by
Q4
In this equation, should the ∂L in the last term be ∂r?
Hi, I found that you use SuperGlue and SuperPoint as the feature extractor and matcher. As far as I know, these two algorithms are trained with supervision; is there any risk of data leakage here? This could affect the fairness of your experiments, because the COLMAP-based pose information is not data-driven, while your method indirectly references extra external data, unless your experimental data is based entirely on SIFT and a brute-force matcher.
Thank you for sharing your code.
I'm trying to reproduce the results in the main Table 1.
I have now fully trained the NeRF baseline (not the 'ours' results), and all of the values come out slightly worse than those in the table.
Below are the test-set results, train-set results, and the results in the paper.
Test set:

| model | scene | psnr | ssim | lpips | prd |
|---|---|---|---|---|---|
| nerf | flower | 13.628 | 0.2909 | 0.7835 | nan |
| nerf | fortress | 15.618 | 0.4311 | 0.6794 | nan |
| nerf | leaves | 12.734 | 0.1451 | 0.7938 | nan |
| nerf | trex | 12.419 | 0.3743 | 0.6729 | nan |

Train set:

| model | scene | psnr | ssim | lpips | prd |
|---|---|---|---|---|---|
| nerf | flower | 13.062 | 0.2887 | 0.8028 | nan |
| nerf | fortress | 13.539 | 0.3868 | 0.7249 | nan |
| nerf | leaves | 12.38599 | 0.143 | 0.819662 | nan |
| nerf | trex | 12.58406 | 0.425573 | 0.692024 | nan |

Paper:

| model | scene | psnr | ssim | lpips | prd |
|---|---|---|---|---|---|
| nerf | flower | 13.8 | 0.302 | 0.716 | nan |
| nerf | fortress | 16.3 | 0.524 | 0.445 | nan |
| nerf | leaves | 13.01 | 0.18 | 0.687 | nan |
| nerf | trex | 15.7 | 0.409 | 0.575 | nan |
Can I get a clue?
Also, I wonder which split (train/val/test) was used for the table.
Hi @jeongyw12382 ,
I have a set of images, and I know the FOV and the θ, Φ 3D angles for each image.
Would it be possible for me to train the NeRF model without COLMAP?
Unfortunately, colmap doesn't work well on my dataset. I get an error saying:
ERROR: the correct camera poses for current points cannot be accessed
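(One possible workaround when the FOV and angles are known: build the poses yourself, in the style of the `pose_spherical` helper from the original NeRF code. The sketch below is mine, under the assumption that the object sits at the origin and the camera looks inward; the function names are illustrative, not from this repo:)

```python
import numpy as np

def focal_from_fov(width_px, fov_deg):
    """Pinhole focal length in pixels from a horizontal field of view."""
    return 0.5 * width_px / np.tan(0.5 * np.radians(fov_deg))

def pose_from_angles(theta_deg, phi_deg, radius):
    """Camera-to-world pose looking at the origin from spherical angles:
    theta rotates around the vertical axis, phi is the elevation, and
    radius is the distance from the origin."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    trans = np.eye(4)
    trans[2, 3] = radius                      # back the camera off along z
    rot_phi = np.eye(4)                       # tilt up/down
    rot_phi[1:3, 1:3] = [[np.cos(ph), -np.sin(ph)],
                         [np.sin(ph),  np.cos(ph)]]
    rot_theta = np.eye(4)                     # spin around vertical axis
    rot_theta[0, 0] = np.cos(th); rot_theta[0, 2] = -np.sin(th)
    rot_theta[2, 0] = np.sin(th); rot_theta[2, 2] = np.cos(th)
    return rot_theta @ rot_phi @ trans

print(focal_from_fov(800, 90.0))  # 400.0 for a 90-degree FOV at 800 px
```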
I have two video streams, taken from two vertically aligned cameras about which I have no information. The streams look like the following:
Camera 2:
where the plant rotates, so it is the only moving object in the scene.
I wanted to give your method a try to obtain the intrinsics of these cameras (so that I can 3D-reconstruct the plant), so I turned the video streams into images (.png) and executed the following on camera 1's images:
bash colmap_utils/colmap.sh ./images/
However, I received an output that I could not make any sense of:
colmap_utils/colmap.sh: line 7: colmap: command not found
colmap_utils/colmap.sh: line 13: colmap: command not found
colmap_utils/colmap.sh: line 19: colmap: command not found
Traceback (most recent call last):
File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 417, in <module>
main()
File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 369, in main
cameras, images, points3D = read_model(path=model_path, ext=".bin")
File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 305, in read_model
cameras = read_cameras_binary(os.path.join(path, "cameras" + ext))
File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 120, in read_cameras_binary
with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: './images/sparse/0/cameras.bin'
Post-colmap
Traceback (most recent call last):
File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 266, in <module>
gen_poses(args.working_dir)
File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 247, in gen_poses
poses, pts3d, perm = load_colmap_data(basedir)
File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 13, in load_colmap_data
camdata = read_model.read_cameras_binary(camerasfile)
File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 120, in read_cameras_binary
with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: './images/sparse/0/cameras.bin'
What am I doing wrong?
Thanks for your wonderful code, first of all.
I'd like to reproduce the result from supplementary Table 3 of your paper, but the flower dataset run with demo.sh turns out a bit differently.
Could I get some advice on the setup for reproduction?
This is the wandb link from my environment.
Thanks
Hello, thanks for your great work.
I am interested in training the FishEyeNeRF dataset with the NeRF++ model.
Specifically, I would like to reproduce the NeRF++ [RD] result presented in Table 4 of the paper using the code you provided.
Would this be possible?
Thanks
Training Done
Starts Train Rendering
0%| | 0/17 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/run_nerf.py", line 1047, in
train()
File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/run_nerf.py", line 973, in train
rgbs, disps = render_path(
File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/render.py", line 157, in render_path
rgb, disp, acc, _ = render(
File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/render.py", line 44, in render
idx_in_camera_param=np.where(i_map==image_idx)[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0
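(What this traceback means, in isolation: `np.where` on an array that does not contain the queried value returns an empty index array, so indexing its first element raises exactly this IndexError. A minimal reproduction plus a defensive lookup; the real fix is likely to ensure the render split's image indices actually appear in `i_map`, which I have not verified against the repo:)

```python
import numpy as np

# Reproduce the failure mode: no element of i_map equals 5, so the
# match array is empty and matches[0] would raise IndexError.
i_map = np.array([0, 1, 2])
matches = np.where(i_map == 5)[0]
assert matches.size == 0

def safe_lookup(i_map, image_idx):
    """Return the position of image_idx in i_map, or None if absent,
    instead of crashing on an empty np.where result."""
    hits = np.where(np.asarray(i_map) == image_idx)[0]
    return int(hits[0]) if hits.size else None

print(safe_lookup(i_map, 1))  # 1
print(safe_lookup(i_map, 5))  # None
```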
Hi,
I would like to run an experiment with your model using a list of pictures of an object, to obtain the estimated camera pose for each picture. How can I set up that experiment?
Thanks in advance,
In scripts/main_table_2/fern/main2_fern_ours.sh, the last line is:
--ft_path logs/main1_fern_nerf/200000.tar
which means the model is initialized from main1_fern_nerf.
But those 200000 iterations in the Table 1 NeRF setting are trained with --run_without_colmap both, while in the paper the Table 2 results are initialized from COLMAP camera information. So the first 200000 iterations should be trained with --run_without_colmap none instead of --run_without_colmap both.
Given the above, there is a conflict. Should it perhaps be
--ft_path logs/main2_fern_nerf/200000.tar
instead?
All the scripts mentioned train a NeRF on a dataset and evaluate it.
Is it possible to run the pretrained model on an arbitrary set of images captured by a camera?
Or does the end-to-end training have to happen for every new set of camera images?
Hi,
Thanks for sharing your work!
I am confused about the downsampling factor for the LLFF dataset. In your config file, the downsample factor is set to 8, whereas in the original NeRF repo it is set to 4 according to this. I am curious why you do not follow the original setting; it may be unfair for comparison.
Hi @joonahn,
This is amazing work! If you don't mind, may I ask you what tool have you used to draw this illustration?
Hi, thanks for your code and work!
I want to know how the estimated parameters "ray_o_noise" and "ray_d_noise" are used in camera undistortion.
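(My reading of the paper's local "raxel" parameters, for anyone following along: the direction residual is added to each ray direction and the result renormalized, while the origin residual is a plain offset on the ray origin. The sketch below illustrates that reading and is not a verified transcription of camera_model.py:)

```python
import numpy as np

def apply_ray_residuals(rays_o, rays_d, noise_o, noise_d):
    """Apply per-pixel learned residuals to ray origins and directions:
    origins get a plain offset, directions get an offset followed by
    renormalization so they stay unit length."""
    rays_o = rays_o + noise_o
    rays_d = rays_d + noise_d
    rays_d = rays_d / np.linalg.norm(rays_d, axis=-1, keepdims=True)
    return rays_o, rays_d

# Two rays along +z, perturbed by small constant residuals.
o, d = apply_ray_residuals(np.zeros((2, 3)),
                           np.tile([0.0, 0.0, 1.0], (2, 1)),
                           np.full((2, 3), 0.01),
                           np.full((2, 3), 0.1))
```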