
yindaz / deepcompletionrelease

555 stars · 141 forks · 9.45 MB

Deep Depth Completion of a Single RGB-D Image

Home Page: http://deepcompletion.cs.princeton.edu/

MATLAB 0.44% Makefile 0.17% C++ 31.00% C 65.71% Objective-C 0.43% Lua 2.26%

deepcompletionrelease's People

Contributors

yindaz


deepcompletionrelease's Issues

In demo_realsense.m, the problem "module 'nn' not found" occurs. How should I solve this? Thanks a lot!

module 'nn' not found:
no field package.preload['nn']
no file './nn.lua'
no file '/usr/share/luajit-2.1.0-beta3/nn.lua'
no file '/usr/local/share/lua/5.1/nn.lua'
no file '/usr/local/share/lua/5.1/nn/init.lua'
no file '/usr/share/lua/5.1/nn.lua'
no file '/usr/share/lua/5.1/nn/init.lua'
no file './nn.so'
no file '/usr/local/lib/lua/5.1/nn.so'
no file '/usr/lib/x86_64-linux-gnu/lua/5.1/nn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'

Comparison to baseline inpainting methods

Hi,

Table 3 in the paper shows evaluation results compared against baseline inpainting methods.
However, I can't find the corresponding evaluation code in evalDepth.m.
Could you tell me where I can find the evaluation code for the inpainting methods?

Thanks in advance.

Best,
Jin

depth out of range

Hi,
I have noticed that you use 4000x depth images, but in my dataset some points are beyond 16 m. How should I deal with these points? Or do you have any suggestions for using 1000x depth images instead?
Thanks!
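For illustration, here is how a depth map stored at one scale can be converted to another while flagging points that overflow the 16-bit PNG range (at 4000x, anything beyond roughly 16.38 m no longer fits). This is only a sketch, not code from this repo; `rescale_depth` is a hypothetical helper.

```python
def rescale_depth(values, in_scale=1000, out_scale=4000, max_val=2**16 - 1):
    """Convert depth values stored as in_scale * meters to out_scale * meters.

    Values that would overflow a 16-bit PNG (beyond ~16.38 m at 4000x)
    are set to 0, i.e. treated as missing depth / holes.
    """
    out = []
    for v in values:
        meters = v / in_scale
        scaled = round(meters * out_scale)
        out.append(0 if scaled > max_val else scaled)
    return out

# 1 m and 18 m stored at 1000x: the 18 m point cannot be represented at 4000x
print(rescale_depth([1000, 18000]))  # [4000, 0]
```

Dropping out-of-range points as holes lets the completion pipeline fill them like any other missing measurement, at the cost of discarding the far readings.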

bad argument #1 to 'sub' (string expected, got nil)

/home/BH/by1706143/torch/install/bin/lua: main_test_bound_matterport.lua:21: bad argument #1 to 'sub' (string expected, got nil)
stack traceback:
[C]: in function 'sub'
main_test_bound_matterport.lua:21: in main chunk
[C]: in function 'dofile'
...6143/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?

An error occurred while running. What is the cause? Thank you.

something wrong with my own depth map

I ran the demo successfully and the result looks good. But when I test my own depth maps, the program doesn't seem to work. The following picture shows the result in MATLAB.
[image]
And this is the input color image:
[image]
Another example:
[image]
[image]
The results are obviously incorrect. What is the reason, and how can I solve it?
The depth sensor is an Azure Kinect.

Method - Optimization

Hi, I would like to see how Section 3.4 (Optimization) is implemented in the code.
According to the README, depth2depth.cpp under the ./gaps/apps directory contains it.
However, I'm not sure where E is formed in the file.
I would appreciate it if you could point me to where E is formed.
Thanks in advance.

No hdf5.h file

Hello, when I cd into the gaps directory and run make, it reports an error:
depth2depth error: hdf5: No such file or directory.
I have seen #5, but I still can't understand how to modify it. Could you please help me?
Thank you very much

Explanation of the optimization equation

I cannot work out the meaning of the variables in Eq. (1) of the paper.

I guess E_D is at least the sum of squares of depth errors.
What is v(p, q)? How can we pick p and q from image N for E_N and E_S?
What is T_{obs} in E_D?

Please give me an explanation with an example.
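For other readers of this thread, here is one reading of the objective (my interpretation of Section 3.4 of the paper; please verify against the original). $T_{obs}$ is the set of pixels with an observed raw depth, $(p, q)$ ranges over neighboring pixel pairs $\mathcal{N}$, and $v(p,q)$ is the vector between the back-projected 3D points at $p$ and $q$, which should be perpendicular to the predicted normal $N(p)$:

```latex
E = \lambda_D E_D + \lambda_S E_S + \lambda_N E_N

% Data term: stay close to the observed raw depth D_0
E_D = \sum_{p \in T_{obs}} \left\| D(p) - D_0(p) \right\|^2

% Normal term: surface tangents should be perpendicular to predicted normals
E_N = \sum_{(p,q) \in \mathcal{N}} \left\| \langle v(p,q),\, N(p) \rangle \right\|^2

% Smoothness term: neighboring depths should be similar
E_S = \sum_{(p,q) \in \mathcal{N}} \left\| D(p) - D(q) \right\|^2
```

As I understand it, the predicted occlusion boundary map additionally down-weights the normal term near boundaries, where the tangent-orthogonality assumption breaks down.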

Epoch

DeepCompletionRelease/torch/BatchIterator.lua:
in function BatchIterator:nextEntry(set)
....
self[set].i = self[set].i + 1
...
I think this will make the variable "i" always be 1 from the second epoch on. Should I change this line to:
self[set].i = i + 1

Thank you!
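For reference, a minimal epoch-safe iterator (hypothetical code, not the repo's BatchIterator) advances its cursor with a modulo wrap-around so that every epoch walks the whole list; whatever the actual fix in the Lua code, this is the behavior to aim for:

```python
class BatchIterator:
    """Minimal sketch of a dataset cursor that cycles indefinitely,
    wrapping at the end so every epoch visits all entries."""

    def __init__(self, entries):
        self.entries = entries
        self.i = 0  # 0-based cursor into the entry list

    def next_entry(self):
        entry = self.entries[self.i]
        # Wrap back to 0 after the last entry instead of getting stuck
        self.i = (self.i + 1) % len(self.entries)
        return entry

it = BatchIterator(["a", "b", "c"])
print([it.next_entry() for _ in range(6)])  # ['a', 'b', 'c', 'a', 'b', 'c']
```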

cannot load '/home/user/torch/install/lib/lua/5.1/libcutorch.so'

/home/hy/torch/install/bin/luajit: /home/hy/torch/install/share/lua/5.1/trepl/init.lua:389: /home/hy/torch/install/share/lua/5.1/cutorch/init.lua:2: cannot load '/home/hy/torch/install/lib/lua/5.1/libcutorch.so'
stack traceback:
        [C]: in function 'error'
        /home/hy/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
        main_test_bound_realsense.lua:2: in main chunk
        [C]: in function 'dofile'
        ...e/hy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: at 0x00405d50
/home/hy/torch/install/bin/luajit: /home/hy/torch/install/share/lua/5.1/trepl/init.lua:389: /home/hy/torch/install/share/lua/5.1/cutorch/init.lua:2: cannot load '/home/hy/torch/install/lib/lua/5.1/libcutorch.so'
stack traceback:
        [C]: in function 'error'
        /home/hy/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
        main_test_realsense.lua:2: in main chunk
        [C]: in function 'dofile'
        ...e/hy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: at 0x00405d50

When I run the demo demo_realsense.m under the ./matlab directory, I encounter the problem described above. I have checked the Torch environment and installed CUDA and cuDNN for Torch, but the error persists.

When I run the command th main_test_bound_realsense.lua -test_model ../pre_train_model/bound.t7 -test_file ./data_list/realsense_list.txt -root_path ../data/realsense/ outside MATLAB, everything works, so I suspect the problem is related to MATLAB.

I've searched on the web and found a similar question: https://github.com/pytorch/pytorch/issues/7082 .

Is there any solution for this problem? Thanks a lot!

Environment setup

How exactly should the runtime environment be configured? Thanks.

SUNCG-RGBD dataset

Hi,

I want to find the agreement file for the SUNCG-RGBD dataset, but the link shown on the GitHub page (https://www.princeton.edu/) does not seem to have any information about the dataset. Could you provide the right link, please? Thanks!

Best,
Zhen

normal estimation

@yindaz Hello, when I run demo_realsense.m, I run into the following problem:
[screenshot]

I found that it is caused by the normal estimation: when estimating normals, it runs out of memory if there are more than about 50 images, and it can only generate about 30 pictures in the normal estimation step. How could I modify this? Could you please give me some suggestions? Thank you very much

Compilation fails: cannot find lhdf5

I am compiling this program on Ubuntu 18.04, and OpenGL is installed.

In depth2depth.cpp, I changed #include "hdf5.h" to #include "/usr/include/hdf5/serial/hdf5.h" to fix the "cannot find hdf5.h" failure.
Subsequently, the problem "cannot find lhdf5" occurs. How should I rewrite the makefile to fix this? Thanks a lot!
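For anyone hitting the same linker error: on Debian/Ubuntu the serial HDF5 headers and libraries live in architecture-specific subdirectories, so a common fix (paths are system-dependent; verify them locally) is to install the development package and pass explicit include and library flags instead of hardcoding the header path:

```
# Install the serial HDF5 development files (Ubuntu/Debian):
#   sudo apt-get install libhdf5-serial-dev
#
# Then, in the makefile, point the compiler and linker at the serial layout
# (adjust the multiarch directory for your system):
CFLAGS  += -I/usr/include/hdf5/serial
LDFLAGS += -L/usr/lib/x86_64-linux-gnu/hdf5/serial -lhdf5
```

With the -I flag in place, the original #include "hdf5.h" line can be restored unchanged.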

Using 16-bit PNG

Hello,

I appreciate you releasing your work, and it is exciting to try it out.
However, I wonder what the reason is behind multiplying the depth by 4000.
I have tried multiplying by different values, and it still seems to work.

Could you explain why you chose this specific value (4000) for the multiplication?

Bests,
Stella

Occlusion && Surface normal

Hello,
I want to perform depth completion on my own depth images, but I am new to this field. As I understand it, I should obtain the estimated surface normal image, the boundary image, and the occlusion weight image to generate the completed depth image. How do I get the occlusion and surface normal images? I ran main_test_bound_realsense.lua to generate the boundary, with the command:
th main_test_bound_realsense.lua
but it failed. What is the matter? Should I add some parameters?
I have also run https://github.com/yindaz/surface_normal. If I generate the surface normals with it, can I feed those surface normal images straight into the depth completion, given that it uses a different model?
Thanks very much

depth2depth error: No hdf5.h file

Hi,
Hi,
Thanks for your work! I found a problem when compiling the files in "gaps":

depth2depth.cpp fatal error: hdf5.h: No such file or directory
compilation terminated.

I am confused: where can I find "hdf5.h"? Thanks a lot!

Explanation regarding CreateTangentEquations function

Can you please explain the motivation behind these 2 lines of code in CreateTangentEquations? Specifically, why are iy and iyA multiplied by the xres value? Thanks.

RNPolynomial d(1.0, (iy)*xres+(ix), 1.0);
RNPolynomial dA(1.0, (iyA)*xres+(ixA), 1.0);
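As I read it (an interpretation, not an authoritative answer): the solver stores the depth image as one flat vector of variables, so a pixel's variable index is its row-major position, iy*xres + ix, and RNPolynomial(1.0, index, 1.0) builds a degree-1 term with coefficient 1.0 on that variable. A toy version of the index mapping:

```python
def flat_index(ix, iy, xres):
    """Row-major position of pixel (ix, iy) in an xres-wide image,
    i.e. the variable index used when the image is flattened to a vector."""
    return iy * xres + ix

# In a 320-wide image, pixel (ix=5, iy=2) is variable 2*320 + 5
print(flat_index(5, 2, 320))  # 645
```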

Is it available to use 16:9 input image?

I have seen #12 and I use upsampling when I need a higher-resolution input image. But what should I do if I want to use a 16:9 input image? Can I edit some parameters to handle this, or is the only way to retrain the model? Thank you!

Dataset

Hello,
I am very interested in your dataset. I have downloaded the agreement files for SUNCG-RGBD, Matterport3D, and ScanNet, printed and signed them (except the PI's name and PI's email, as I don't know what PI means). I scanned the agreement files and sent them to the organizers nearly 4 days ago, but I have received no response so far. Could you please tell me whether there is a problem, or should I wait patiently?
Thank you very much

Does the unit of depth matter?

Hi,

I am trying to apply your method to my own data using the pretrained model directly. I noticed that the README says the input and output depth image values should be 4000 x depth in meters. Can I use 1000 x depth in meters as input instead, i.e. a value of 1000 in the depth map when the real depth is 1 meter? Will this different unit affect the performance of the pretrained model?

Convert pre-trained models to CPU before saving

I'm trying to load the pre-trained models (.t7 files) in PyTorch using https://github.com/clcarwin/convert_torch_to_pytorch. The error I run into points me to this issue in the same repo, suggesting that the issue is that the models were not converted to CPU before being saved.

In order to allow a broader community access to these models, I'd like to ask that you convert the models to CPU, re-save and provide these models as well.

My setup makes installing Lua and Torch quite cumbersome, it seems; otherwise I would have prepared these models and made a PR.

Thanks!

Datasets issues

I emailed the author for the depth completion datasets, but I haven't received any reply. What happened? Does anyone know?

Forbidden access

Hello, I want to download the pretrained models using the links provided in the pre_train_model folder. However, I get a permission error. Could you please tell me how I should download the models?


Get all the same data after depth2depth

I have compiled and run the code on ScanNet, but I cannot get a correctly improved depth image: every value in the output image is the same. The output is as follows:

../gaps/bin/x86_64/depth2depth /home/dsc/Downloads/scannet_test/scene0011_00/depth/000000002.png ../results/scannet/scene0011_00_000000002_suffix_1.png -xres 320.000000 -yres 240.000000 -fx 577.590698 -fy 578.729797 -cx 318.905426 -cy 242.683609 -inertia_weight 1000.000000 -smoothness_weight 0.001000 -normal_weight 1.000000 -input_normals ../torch/result/normal_scannet_scannet_test/scene0011_00_000000002_suffix_normal_est.h5 -input_normal_weight ../torch/result/bound_scannet_weight/scene0011_00_000000002_suffix_weight.png
Read image from /home/dsc/Downloads/scannet_test/scene0011_00/depth/000000002.png
Time = 0.01 seconds
Resolution = 640 480
Spacing = 1
Cardinality = 221228
Minimum = 0
Maximum = 1.00975
L1Norm = 89149.6
L2Norm = 199.071
Read images from ../torch/result/normal_scannet_scannet_test/scene0011_00_000000002_suffix_normal_est.h5
Time = 0.07 seconds
Resolution = 320 240
Cardinality = 76800
Minimum = -0.83625
Maximum = 0.77635
L1Norm = 11053.4
L2Norm = 95.9391
Read image from ../torch/result/bound_scannet_weight/scene0011_00_000000002_suffix_weight.png
Time = 0.00 seconds
Resolution = 320 240
Spacing = 1
Cardinality = 76066
Minimum = 0
Maximum = 1
L1Norm = 51001.6
L2Norm = 197.557
Read images ...
Time = 0.09 seconds
Resolution = 320 240

Images = 4

Solved for depth image
Time = 21.81 seconds

Variables = 76800

Equations = 1576920

Inertia Equations = 55313
Smoothness Equations = 306080
Derivative Equations = 0
Normal Equations = 909447
Tangent Equations = 306080
Range Equations = 0

Initial SSD = 5.24935e+10
Final SSD = 5.16869e+10
Wrote image to ../results/scannet/scene0011_00_000000002_suffix_1.png
Time = 0.01 seconds
Resolution = 320 240
Spacing = 1
Cardinality = 76800
Minimum = -0.333333   <-- this is the problem
Maximum = -0.333083   <-- this is the problem
L1Norm = -25600
L2Norm = 92.376

The input data have different minimum and maximum values, but the output doesn't.

I have checked the source code and found that CreateDepthImage() does not work correctly.

How can I debug this?

Looking for a copy of data

Bad news to announce: our technical staff accidentally deleted all the shared data related to this paper and repo. We are still trying to recover the data. Meanwhile, please let me ([email protected]) know if you happen to have downloaded the data in whole or in part. Thanks!

dataset issues

Hi Yindaz,

I tried to sign the agreement and download the dataset, but the link http://pbrs.cs.princeton.edu/ redirects to princeton.edu, and suncg.cs.princeton.edu also redirects to princeton.edu.

thanks a lot

Reproduce result & Evaluation on Matterport dataset

Hi,

I'm trying to reproduce the results on the Matterport dataset, but I have run into difficulties. The following lines are the script I ran; afterwards the depth images were blurry, not as clear as those in the paper, and the evaluation numbers did not match. Could anyone tell me whether my script is incorrect? Thanks!

cd('../torch/');

% Boundary detection
cmd = 'lua main_test_bound_matterport.lua -test_model ../pre_train_model/bound.t7 -test_file ./data_list/mp_test_list_horizontal.txt -root_path <my_root_path>';
system(cmd); % the result should be in ../torch/result/

% Surface normal estimation
cmd = 'lua main_test_matterport.lua -test_model ../pre_train_model/normal_matterport.t7 -test_file ./data_list/mp_test_list_horizontal.txt -root_path <my_root_path>';
system(cmd);

% cd('../matlab/');

% Get occlusion boundary (the 2nd channel) and convert to weight
GenerateOcclusionWeight('../torch/result/bound_matterport_test_bound/', '../torch/result/bound_matterport_weight/');

% Compose depth by global optimization
composeDepth('<my_root_path>', '../torch/result/normal_matterport_matterport_test', '../torch/result/bound_matterport_weight', 'mp_render', '../results/matterport', [1001, 0.001, 1]);

Port optimization code to python

Thanks for your work!
I want to port the global optimizer to Python to use it for a similar task. Do you know of any Python bindings for sparse solvers that I could use, or of any existing codebase doing something similar?
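One option (a sketch under the assumption that SciPy is acceptable): scipy.sparse.linalg.lsqr solves sparse linear least-squares systems of the kind depth2depth assembles, where each row of A encodes one data, smoothness, or normal equation. A toy system with two data equations (pinning x0 near 1 and x2 near 3) and two smoothness equations (neighboring values should match):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

# Rows 0-1: data equations x0 ~ 1, x2 ~ 3
# Rows 2-3: smoothness equations x0 - x1 ~ 0, x1 - x2 ~ 0
rows = [0, 1, 2, 2, 3, 3]
cols = [0, 2, 0, 1, 1, 2]
vals = [1.0, 1.0, 1.0, -1.0, 1.0, -1.0]
A = coo_matrix((vals, (rows, cols)), shape=(4, 3)).tocsr()
b = np.array([1.0, 3.0, 0.0, 0.0])

# Least-squares solution balances data fidelity against smoothness
x = lsqr(A, b)[0]
print(np.round(x, 3))  # approx. [1.5, 2.0, 2.5]
```

Per-equation weights (inertia, smoothness, normal) would simply scale the corresponding rows of A and entries of b before solving.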

One example of the train dataset

I'm preparing to train on my own data, but I don't know the correct format of a training example. Could you please post one example of the training data?
That is, surface normal part:

  1. color image
  2. mesh_nx.png
  3. mesh_ny.png
  4. mesh_nz.png
  5. raw_depth.png
  6. mesh_depth.png

occlusion boundary part:

  1. _mlt.png
  2. _valid.png
  3. _3d_boundary.png

Thanks for your excellent work!

Matterport3D Dataset

I have got access to the original dataset. Could you kindly provide your train and test splits?

Input Format

Can you help specify the input format?
For depth, how do you specify the holes?
For normals, is each pixel a value (nx, ny, nz) ∈ [-1, 1]^3?
For the occlusion boundary, how do you save the probability? You said it's saved as .png, but I suppose the probability is a float while PNG stores integers.
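On the probability-in-PNG question, the usual trick (a guess at the convention, not confirmed from this repo) is to quantize the float to the integer range of the PNG on save and divide back on load:

```python
def prob_to_uint16(p):
    """Quantize a probability in [0, 1] to the 16-bit PNG range."""
    p = min(max(p, 0.0), 1.0)  # clamp before scaling
    return int(round(p * 65535))

def uint16_to_prob(v):
    """Recover an approximate probability from the stored 16-bit value."""
    return v / 65535.0

print(prob_to_uint16(1.0))  # 65535
print(prob_to_uint16(0.0))  # 0
# Round-tripping loses at most half a quantization step (~7.6e-6)
print(abs(uint16_to_prob(prob_to_uint16(0.3)) - 0.3) < 1e-4)  # True
```

An 8-bit PNG works the same way with 255 in place of 65535, at coarser precision.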

Input image resolution

Hello,
I want to get the output completed depth image at 640x480; currently it is 320x240. I have searched all the files but cannot find where to modify the resolution. Could you please tell me how to change it to 640x480?
Thank you very much

Evaluation

Hello, I want to evaluate the depth completion performance on my own dataset. I know I should use the evalDepth.m file; its inputs are result_folder, dataset, and suffix. I wonder what the suffix is. Is it the test file in /torch/data_list, as the following picture shows?
[screenshot]

I opened the scannet_test_list_small.txt file; its content is as follows:
scene0144_01/data_dir/000000128_suffix
scene0678_02/data_dir/000001472_suffix
scene0643_00/data_dir/000000128_suffix
scene0580_00/data_dir/000003975_suffix
scene0591_01/data_dir/000000480_suffix
scene0606_01/data_dir/000001843_suffix
scene0011_01/data_dir/000001952_suffix
scene0609_00/data_dir/000000160_suffix
What do these entries mean? And how do I generate such a file to evaluate the performance?
Thank you very much!

evalDepth.m

@yindaz hello,
Could you please tell me what I should do to evaluate results on my own dataset? In the evalDepth.m file, there are two lines as follows:
raw = testdata(a).raw;
gdt = testdata(a).gdt;

raw is the original data, right? But what is gdt, and how do I get it?
I have also downloaded "scannet_normal" and "matterport_normal" in the pre_calc_result folder. I wonder whether the pictures in those folders are the result pictures after the DeepCompletion step?
Thank you very much!
