
geo-neus's People

Contributors

ghixu, qiancheng-fu


geo-neus's Issues

Differentiable Depth Values in Photometric Loss

I have a doubt related to the Photometric loss.

Am I correct in thinking that the surface points calculated through bilinear interpolation are not a function of the network parameters, so the gradients of the photometric loss only flow through the normals at those points?

In which case:
What are your thoughts on the impact of making the surface depth values, and hence their corresponding points, differentiable using the methods proposed by DVR or IDR?

Computing points.npy, view_id.npy, pairs.txt

Dear authors,

Thank you for your contribution. In testing your code, I assume that the cameras.npz files follow the format from NeuS and IDR. However, you also load the sparse pointcloud in the points.npy and view_id.npy. Could you please provide data samples and/or explain the structure of these files?

Furthermore, you use a pairs.txt file, which I assume is a precomputed set of close source views for each reference view. How is this computed?
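For reference, a minimal sketch of the cameras.npz convention used by NeuS and IDR (this describes those codebases' format, not anything extra Geo-Neus may store; the intrinsics and pose below are invented example values):

```python
import numpy as np

# cameras.npz (NeuS/IDR convention) stores, per view i:
#   world_mat_i : 4x4, the full projection K[R|t] padded with a [0, 0, 0, 1] row
#   scale_mat_i : 4x4 similarity mapping the normalized unit sphere to world space
# A point x in normalized space projects as p ~ world_mat @ scale_mat @ [x; 1].

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])               # example intrinsics
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [4.0]]])  # camera 4 units from the origin
world_mat = np.vstack([K @ Rt, [0.0, 0.0, 0.0, 1.0]])

scale_mat = np.diag([2.0, 2.0, 2.0, 1.0])           # scene radius 2, centered at origin

x_unit = np.array([0.0, 0.0, 0.0, 1.0])             # sphere center, normalized coords
p = world_mat @ scale_mat @ x_unit
pixel = p[:2] / p[2]
print(pixel)  # the scene center projects to the principal point: [320. 240.]
```

What points.npy, view_id.npy, and pairs.txt contain on top of this is exactly what the question asks the authors to document.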

performance in DTU

Thanks for your excellent work.

I used the default parameter settings in the repo, but got performance on the DTU dataset noticeably worse than reported in the paper:

scan24: 0.446 (paper: 0.375)
scan37: 0.968 (paper: 0.537)

Do you have any suggestions?

Doubts about the scale transformation for the sparse points

Hello! I noticed that when loading the camera poses, their scale and position are spatially normalized with a scale_mat so that the whole scene fits inside a unit sphere. However, the provided sparse point cloud does not seem to undergo such a normalization. Has the provided points.npy already been normalized in this way? Otherwise it does not seem to correspond to the distribution of the camera poses.
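If points.npy were stored in world coordinates, the normalization in question would presumably look like the following sketch, which maps points into the unit sphere with the inverse of the same scale_mat used for the poses (the matrix values here are invented for illustration):

```python
import numpy as np

# Assumption: the COLMAP sparse points are in world coordinates and must be
# mapped into the unit sphere with the inverse of the same scale_mat that
# normalizes the camera poses.
scale_mat = np.diag([2.5, 2.5, 2.5, 1.0])
scale_mat[:3, 3] = [0.1, -0.2, 0.3]                 # hypothetical scene center

world_pts = np.array([[2.6, -0.2, 0.3],             # lies on the sphere surface
                      [0.1, -0.2, 0.3]])            # lies at the scene center
homog = np.hstack([world_pts, np.ones((len(world_pts), 1))])
unit_pts = (np.linalg.inv(scale_mat) @ homog.T).T[:, :3]
print(np.linalg.norm(unit_pts, axis=1))  # -> [1. 0.]: inside the unit sphere
```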

Question on Sampling Strategy

Hi,
I have a doubt related to the sampling strategy used in the photometric and RGB losses.


Photometric losses require patches to calculate the NCC.

Are you obtaining the patches by rendering entire patches, and using the rendered values to calculate both the NCC and the RGB loss?

OR

Are you rendering random pixels (which you use for the RGB loss), then taking patches only around the pixels that intersect the object, with the RGB values of each patch coming from the reference image (rather than rendering the remaining pixels)?
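The second alternative could be sketched as follows. This is only an assumption about how it might work, not the confirmed implementation, and patches_around is a hypothetical helper:

```python
import numpy as np

# patches_around is a hypothetical helper: it crops k x k patches from the
# reference image around already-sampled pixel locations, so the patch colors
# come from the image itself rather than from additional rendering.
def patches_around(image, pixels, half=3):
    """image: H x W x 3; pixels: N x 2 integer (x, y), away from the borders."""
    offs = np.arange(-half, half + 1)
    dy, dx = np.meshgrid(offs, offs, indexing="ij")
    ys = pixels[:, 1, None, None] + dy              # N x k x k row indices
    xs = pixels[:, 0, None, None] + dx              # N x k x k column indices
    return image[ys, xs]                            # N x k x k x 3

img = np.arange(20 * 20 * 3).reshape(20, 20, 3)
patches = patches_around(img, np.array([[5, 8]]))   # one 7x7 patch at (x=5, y=8)
print(patches.shape)  # -> (1, 7, 7, 3)
```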

An unknown error occurred when running python eval.py

Hello, thank you very much for your work. I ran into a puzzle. When I run "python eval.py --conf ./confs/womask.conf --case DTU/scan24", the following error occurs: "FileNotFoundError: [Errno 2] No such file or directory: '../public_data/DTU/scan24/ObsMask/ObsMask24_10.mat'".
So I want to ask how to solve this problem, or how to obtain the file ObsMask24_10.mat? I checked carefully, and it does not seem to be there. I hope to get your help, thank you very much!

eval

Hello, when I ran eval.py on the DTU dataset, I encountered the following problem. May I know how to solve it?
compute data2stl: 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 6/9 [01:01<00:30, 10.16s/it]
Traceback (most recent call last):
  File "eval.py", line 27, in <module>
    dtu_eval.eval(inp_mesh_path, int(scene), "/root/autodl-tmp/NeUDF-main/eval", eval_dir, args.suffix)
  File "/root/autodl-tmp/NeUDF-main/evaluation/dtu_eval.py", line 120, in eval
    dist_d2s, idx_d2s = nn_engine.kneighbors(data_in_obs, n_neighbors=1, return_distance=True)
  File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/neighbors/_base.py", line 804, in kneighbors
    X = self._validate_data(X, accept_sparse="csr", reset=False, order="C")
  File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/base.py", line 604, in _validate_data
    out = check_array(X, input_name="X", **check_params)
  File "/root/miniconda3/envs/neudf/lib/python3.8/site-packages/sklearn/utils/validation.py", line 969, in check_array
    raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 3)) while a minimum of 1 is required by NearestNeighbors.

about color bias in NeuS

Hi, it's really a nice job.
But how did you obtain the color bias phenomenon in NeuS? I tried to render views using the interpolation operation you propose in your paper, but I still do not observe the color bias shown in Fig. 1.

Question about source views

Hi, thanks for the great work!
I want to know how many source views are used in the geometry loss for each reference view, and how you select them?

about dataset:"data/epfl"

Hello, I want to ask some questions about the dataset "data/epfl" in eval.py. Are these paths auto-generated, or should the data be available locally? I would appreciate it if you could reply.

Reproducibility of the released code

Hi, all. Thank you for your brilliant work in NeurIPS 2022. Congratulations!

I strictly follow your guidance to install the environment and run the code.

But I found that some scenes in DTU do not converge properly, and others do not match the quantitative metrics reported in your paper.

I want to know whether there are potential reasons to incur this issue.

Thanks for your time, and I am looking forward to your reply.

Neus performance is different from the Neus Paper

Hi, I've noticed that the performance on the DTU dataset reported in Table 1 of your paper differs from the performance reported in the NeuS paper. Could you explain why?
This is table 1 in Geo-Neus paper
This is table 1 in NeuS paper

Missing square root in the denominator when calculating NCC?

Hello! According to Equation 22 in the paper, the denominator should take the square root after computing the variances of the reference and source image patches. However, the corresponding code does not seem to apply any square root to the denominator, so I came to ask whether the code is wrong:

cc = cross * cross / (ref_var * src_var + 1e-5) # [batch_size, nsrc, 1, npatch]
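For what it's worth, the squared form in the code equals NCC², so dropping the root is numerically consistent as long as the loss is meant to use the squared score (it also discards the sign of the correlation). A small check on toy data:

```python
import numpy as np

# Toy patches; the shapes are arbitrary stand-ins for [batch, npatch] pixels.
rng = np.random.default_rng(0)
ref = rng.normal(size=(4, 16))
src = rng.normal(size=(4, 16))

ref_c = ref - ref.mean(axis=1, keepdims=True)
src_c = src - src.mean(axis=1, keepdims=True)
cross = (ref_c * src_c).mean(axis=1)
ref_var = (ref_c ** 2).mean(axis=1)
src_var = (src_c ** 2).mean(axis=1)

ncc = cross / np.sqrt(ref_var * src_var)  # Eq. 22 form, with the square root
cc = cross * cross / (ref_var * src_var)  # the code's form
assert np.allclose(cc, ncc ** 2)          # identical up to squaring
```

Whether the paper's Eq. 22 or the squared variant is the intended loss is still a fair question for the authors.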

Evaluation

Hello, thank you for your work, but how can I use the eval.py file for testing? The dataset does not provide ground-truth values.

Poor results when reproducing the paper

Hello author, I followed your steps to reproduce the code. My command is as follows: first, execute python exp_runner.py --mode train --conf ./confs/womask.conf --case scan37. The validation image 00300000_0_40.png does not look good, and the 3D reconstruction in the .ply file I generated when extracting the surface is also not very promising. Looking forward to your reply.

Inconsistency between the view used in the SDF loss term and the other loss terms

Hi, thank you for your great work!

I was testing Geo-Neus on my own data and noticed that, on each iteration of training, you sample rays for a random image image_perm[self.iter_step % len(image_perm)] at exp_runner.py#L126, but sample guidance points for a generally different image self.iter_step % len(image_perm) at exp_runner.py#L145-148.

So I wonder whether this is a bug or a feature; could you please comment on that?
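The mismatch can be reproduced in isolation (the permutation value is an arbitrary example; the names follow exp_runner.py):

```python
import numpy as np

# image_perm stands in for the shuffled image order in exp_runner.py.
image_perm = np.array([2, 0, 4, 1, 3])                # a shuffled order of 5 images
iter_step = 1
ray_image = image_perm[iter_step % len(image_perm)]   # rays sampled from this image
sdf_image = iter_step % len(image_perm)               # SDF guidance points use this index
print(ray_image, sdf_image)  # -> 0 1: two different images in the same iteration
```

Unless image_perm happens to be the identity permutation, the two indices generally disagree, which is exactly the inconsistency described above.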

Code release?

Hi, congratulations on your acceptance to NeurIPS! Any idea on when the code will be made available? Thanks!

Error when performing eval.py


Hello, when I run the eval.py you provided, I get the following error: No such file or directory: '/home/yuanzikang/project/geo-neus/exp/DTU/ObsMask/ObsMask24_10.mat'

I checked the corresponding path and this file is indeed not there. I also ran the first two commands of eval.py and did not find any output file either. What could the problem be?

About preprocessing & points.npy

Hello. Thanks for your great work! I have a question about preprocessing data. I am curious about how to obtain the file points.npy.
My preprocessing steps are as follows:

  1. Following the custom preprocessing steps of NeuS, I obtained sparse_points.ply.
  2. I removed the noise points to get sparse_points_interest.ply.
  3. I normalized sparse_points_interest.ply and saved it as an .npy file.

I conducted experiments using DTU scan63 as an example, but the results were not as good as with the official data. I hope you can share how Geo-Neus obtains the points.npy and view_id.npy files, as this would be of great help to me. Thank you very much!

Question about visible points for each view

Thank you for your impressive work!
Now I have obtained the sparse point cloud through COLMAP. Do you have any suggestions on how to obtain visible points for each view?
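One possible approach comes from COLMAP's documented points3D.txt format, where each point carries a TRACK of (IMAGE_ID, POINT2D_IDX) pairs: a point is visible (i.e. matched) in exactly the images of its track. A sketch with a hypothetical helper:

```python
from collections import defaultdict

# visible_points_per_view is a hypothetical helper. COLMAP's points3D.txt lists,
# per point: POINT3D_ID X Y Z R G B ERROR, followed by a TRACK of
# (IMAGE_ID, POINT2D_IDX) pairs; the track names the views where the point was
# matched, which can serve as its set of visible views.
def visible_points_per_view(points3d_txt):
    per_view = defaultdict(list)
    for line in points3d_txt.splitlines():
        if not line or line.startswith("#"):
            continue
        tok = line.split()
        xyz = [float(v) for v in tok[1:4]]
        for image_id in tok[8::2]:          # IMAGE_ID of each (id, idx) track pair
            per_view[int(image_id)].append(xyz)
    return per_view
```

Whether this matches how Geo-Neus builds view_id.npy is for the authors to confirm.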

Pre-trained models?

Hi @GhiXu, thank you for the work.

Will pre-trained models be provided to re-produce the reported numbers?

Table 1 evaluation

Hello, may I ask how the evaluation of the DTU dataset in Table 1 of the paper was calculated, and is there any relevant code?
