ghixu / geo-neus Goto Github PK
Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction (NeurIPS 2022)
License: MIT License
I have a doubt related to the photometric loss.
Am I correct in thinking that the surface points calculated through bilinear interpolation are not a function of the network parameters? If so, the gradients of the photometric loss only flow through the normals at those points.
In which case: what are your thoughts on the impact of making the surface depth values, and thus their corresponding points, differentiable using the methods proposed by DVR or IDR?
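For context, IDR and DVR make the intersection point differentiable with an implicit-function-style correction: x(theta) = x0 - f_theta(x0) / <grad f(x0), v> * v, where x0 is a detached root of the SDF f along the ray direction v. A minimal numpy sketch with a toy sphere SDF standing in for the network (everything here is illustrative, not the authors' code); the finite difference at the end shows the point now responds to the parameter:

```python
import numpy as np

# Toy SDF: a sphere of radius r centred at the origin (r plays the role
# of a network parameter theta).
def sdf(x, r):
    return np.linalg.norm(x) - r

def sdf_grad(x, r):
    return x / np.linalg.norm(x)

# Ray from origin o along unit direction v.
o = np.array([0.0, 0.0, -2.0])
v = np.array([0.0, 0.0, 1.0])

def surface_point(r):
    """IDR-style differentiable intersection:
    x(theta) = x0 - f_theta(x0) / <grad f(x0), v> * v,
    where x0 is a (detached) root of f along the ray."""
    # root located by dense sampling, standing in for sphere tracing
    t = np.linspace(0.0, 4.0, 4001)
    pts = o[None, :] + t[:, None] * v[None, :]
    x0 = pts[np.argmin(np.abs([sdf(p, r) for p in pts]))]
    return x0 - sdf(x0, r) / np.dot(sdf_grad(x0, r), v) * v

x = surface_point(1.0)  # hits the unit sphere at z = -1
# finite-difference sensitivity of the point w.r.t. the "parameter" r
eps = 1e-4
dx_dr = (surface_point(1.0 + eps) - surface_point(1.0 - eps)) / (2 * eps)
print(x, dx_dr)  # point ~ (0, 0, -1), dx/dr ~ (0, 0, -1)
```

With bilinear interpolation alone the analogue of `surface_point` would return `x0` directly, and `dx_dr` would be zero: gradients could not reach the geometry through the point's position.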
Dear authors,
Thank you for your contribution. In testing your code, I assume that the cameras.npz files follow the format from NeuS and IDR. However, you also load the sparse point cloud in points.npy and view_id.npy. Could you please provide data samples and/or explain the structure of these files?
Furthermore, you use a pairs.txt file, which I assume is a precomputed set of close source views for each reference view. How is this computed?
Thanks for your excellent work.
I used the default parameter settings in the repo and got noticeably worse performance on the DTU dataset than reported in the paper:
scan24: 0.446, but your paper reports 0.375.
scan37: 0.968, but your paper reports 0.537.
Do you have any suggestions?
Thanks for your work! Could you tell me when the pre-trained model will be released?
Hello Author! I noticed that when loading the camera poses, their scale and position are normalized using a scale_mat so that the whole scene fits inside a unit sphere. However, the provided sparse point cloud does not appear to undergo such a normalization. Does the provided points.npy already have this applied? Otherwise it would not correspond to the distribution of the camera poses.
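For reference, in the NeuS/IDR convention the scale_mat maps unit-sphere coordinates to world coordinates, so a world-space point cloud is normalized with its inverse. A sketch of applying the same normalization to raw points (this assumes points.npy is in world coordinates; if it is already normalized, the step must be skipped):

```python
import numpy as np

# scale_mat in the NeuS/IDR convention: uniform scale s on the diagonal
# plus a translation t, mapping unit-sphere coordinates to world space.
s, t = 2.5, np.array([0.3, -0.1, 1.2])
scale_mat = np.eye(4)
scale_mat[:3, :3] *= s
scale_mat[:3, 3] = t

def normalize_points(pts_world, scale_mat):
    """Apply the inverse of scale_mat: p_norm = (p_world - t) / s."""
    inv = np.linalg.inv(scale_mat)
    pts_h = np.concatenate([pts_world, np.ones((len(pts_world), 1))], axis=1)
    return (inv @ pts_h.T).T[:, :3]

pts_world = np.array([[0.3, -0.1, 1.2],         # scene centre
                      [0.3 + 2.5, -0.1, 1.2]])  # one scale unit away
pts_norm = normalize_points(pts_world, scale_mat)
print(pts_norm)  # [[0, 0, 0], [1, 0, 0]]: both inside/on the unit sphere
```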
Hi,
I have a doubt related to the sampling strategy used for the photometric and RGB losses.
The photometric loss requires patches to calculate the NCC.
Are you obtaining the patches by rendering only patches, and using the rendered values for calculating both the NCC and RGB losses?
OR
Are you rendering random pixels (which you use for the RGB loss), and then taking patches only around those pixels which intersect the object, with the RGB values of the patch coming from the reference image (and not from rendering the remaining pixels)?
Hello, thank you very much for your work. I ran into a puzzle. When I run "python eval.py --conf ./confs/womask.conf --case DTU/scan24", an error occurs: "FileNotFoundError: [Errno 2] No such file or directory: '../public_data/DTU/scan24/ObsMask/ObsMask24_10.mat'".
So I want to ask how to solve this problem, or how do I get the file ObsMask24_10.mat? I checked carefully, and it doesn't seem to be there. Hope to get your help, thank you very much!
Hello, when I ran the code for eval.py on the DTU dataset, I encountered the following problem. May I know how to solve it?
compute data2stl:  67%| 6/9 [01:01<00:30, 10.16s/it]
Traceback (most recent call last):
File "eval.py", line 27, in
dtu_eval.eval(inp_mesh_path, int(scene), "/root/autodl-tmp/NeUDF-main/eval", eval_dir, args.suffix)
File "/root/autodl-tmp/NeUDF-main/evaluation/dtu_eval.py", line 120, in eval
dist_d2s, idx_d2s = nn_engine.kneighbors(data_in_obs, n_neighbors=1, return_distance=True)
File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/neighbors/_base.py", line 804, in kneighbors
X = self._validate_data(X, accept_sparse="csr", reset=False, order="C")
File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/base.py", line 604, in _validate_data
out = check_array(X, input_name="X", **check_params)
File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/utils/validation.py", line 969, in check_array
raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 3)) while a minimum of 1 is required by NearestNeighbors.
Hi, it's really a nice job.
But how do you get the color bias phenomenon in NeuS?
I have tried to render views using the interpolation operation you proposed in your paper,
but the color bias phenomenon shown in Fig. 1 of your paper is still not observed.
Hi, thanks for the great work!
I want to know how many source views are used in the geometry loss for a reference view, and how do you select them?
Hello, I want to ask some questions about "dataset: "data/epfl"" in eval.py. Are these paths auto-generated or expected to exist locally? I would appreciate it if you could reply.
Hi, all. Thank you for your brilliant work in NeurIPS 2022. Congratulations!
I strictly follow your guidance to install the environment and run the code.
But I found that some scenes in DTU do not converge normally, and others do not match the quantitative metrics reported in your paper.
I want to know whether there are potential reasons to incur this issue.
Thanks for your time, and I am looking forward to your reply.
Hi,
In your code, you multiply the ncc values by mid_inside_sphere. Could you please explain the purpose of this?
Hello Author! According to Equation 22 in the paper, the denominator should take a square root after computing the variances of the reference and source image patches. However, the corresponding code does not seem to apply a square root to the denominator, so I came to ask whether the code is wrong here.
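For reference, the standard NCC between patches p and q is Cov(p, q) / sqrt(Var(p) * Var(q)); without the square root the score is no longer invariant to an affine intensity change. A quick numpy check (illustrative only, not the repository's code):

```python
import numpy as np

def ncc(p, q, eps=1e-8):
    """Normalized cross-correlation with the square root in the
    denominator, as in Eq. 22: Cov(p, q) / sqrt(Var(p) * Var(q))."""
    p, q = p - p.mean(), q - q.mean()
    return (p * q).mean() / np.sqrt((p * p).mean() * (q * q).mean() + eps)

def ncc_no_sqrt(p, q, eps=1e-8):
    """Same numerator, but the square root is dropped."""
    p, q = p - p.mean(), q - q.mean()
    return (p * q).mean() / ((p * p).mean() * (q * q).mean() + eps)

rng = np.random.default_rng(0)
patch = rng.random((11, 11))
# NCC is invariant to an affine intensity change a*patch + b ...
print(ncc(patch, 2.0 * patch + 0.5))          # ~ 1.0
# ... but dropping the sqrt breaks that invariance.
print(ncc_no_sqrt(patch, 2.0 * patch + 0.5))  # far from 1.0
```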
Line 553 in bb17c75
Hello, thank you for your work, but how can I use the eval.py file for testing? The dataset does not provide ground-truth values.
Hello author, I followed your steps to run the code. My command is as follows: first, execute python exp_runner.py --mode train --conf ./confs/womask.conf --case scan37. The validation image 00300000_0_40.png does not look good, and the 3D reconstruction from the .ply file I generated when extracting the surface is also not promising. Looking forward to your reply.
Hi, thank you for your great work!
I was testing Geo-Neus on my own data and noticed that, on each training iteration, you sample rays for a random image image_perm[self.iter_step % len(image_perm)] at exp_runner.py#L126, but sample guidance points for a generally different image self.iter_step % len(image_perm) at exp_runner.py#L145-148.
So I wonder if this is a bug or a feature; could you please comment on that?
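To illustrate the mismatch: with a shuffled image_perm, the permuted lookup and the raw index generally name different images. A tiny deterministic sketch (my reading of the two lines, with the permutation fixed by hand instead of shuffled):

```python
# A permutation of image indices, as exp_runner.py produces once per
# epoch (fixed here for determinism instead of random shuffling).
image_perm = [2, 0, 1]

iter_step = 1
idx = iter_step % len(image_perm)
ray_image = image_perm[idx]   # image whose rays are sampled -> 0
guidance_image = idx          # image whose guidance points are used -> 1
print(ray_image, guidance_image)  # 0 1: two different images
# Presumed fix, if this is indeed unintended: guidance_image = image_perm[idx]
```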
Hi, congratulations on your acceptance to NeurIPS! Any idea on when the code will be made available? Thanks!
Hello. Thanks for your great work! I have a question about preprocessing data: how do you obtain the file points.npy?
My preprocessing steps are as follows:
Thanks for your great work. I want to use it on my own data; could you give some advice on how to generate points.npy and view_id.npy from COLMAP outputs?
Thank you for your impressive work!
Now I have obtained the sparse point cloud through COLMAP. Do you have any suggestions on how to obtain visible points for each view?
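One plausible route, given COLMAP's documented points3D.txt export (each line: POINT3D_ID, XYZ, RGB, ERROR, then a track of (IMAGE_ID, POINT2D_IDX) pairs), is to collect for each image the points whose track contains it. A sketch; the actual points.npy / view_id.npy layout used by the authors is not documented, so this only guesses at the needed data:

```python
import numpy as np

def visible_points_per_view(points3d_txt):
    """Parse COLMAP's points3D.txt and return {image_id: Nx3 array} of
    the sparse points whose track contains that image, i.e. the points
    COLMAP verified as visible in that view."""
    per_view = {}
    for line in points3d_txt.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        tok = line.split()
        xyz = [float(v) for v in tok[1:4]]
        # track: (IMAGE_ID, POINT2D_IDX) pairs after ID, XYZ, RGB, ERROR
        image_ids = {int(v) for v in tok[8::2]}
        for img in image_ids:
            per_view.setdefault(img, []).append(xyz)
    return {k: np.array(v) for k, v in per_view.items()}

# tiny synthetic example in the points3D.txt format
sample = """# 3D point list
1 0.0 0.0 1.0 255 0 0 0.5 1 0 2 3
2 1.0 0.0 1.0 0 255 0 0.4 2 1
"""
vis = visible_points_per_view(sample)
print(sorted(vis))  # image ids [1, 2]
print(len(vis[2]))  # image 2 sees both points
```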
Hi @GhiXu, thank you for the work.
Will pre-trained models be provided to reproduce the reported numbers?