
tof_rgbd_processing's Issues

Error when compiling warp_by_flow

When I compile warp_by_flow with TF 1.15 and CUDA 10.0, I get an error saying that
#include "tensorflow/core/util/cuda_kernel_helper.h"
does not exist. It seems that TF 1.15 has removed this header. Did you test it with any other TF version?
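
In case it helps narrow this down: a quick way to check which kernel-helper headers your installed TensorFlow actually ships (in later 1.x releases the header appears to have been renamed to gpu_kernel_helper.h) is to list TF's own include directory. This is just a diagnostic sketch, not code from the repository:

  import os
  import tensorflow as tf

  # Directory of TensorFlow's C++ headers used when building custom ops
  include_dir = tf.sysconfig.get_include()
  util_dir = os.path.join(include_dir, "tensorflow", "core", "util")

  # Print the *kernel_helper* headers this TF build provides
  print([f for f in os.listdir(util_dir) if "kernel_helper" in f])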

TypeError: Expected list for 'values' argument to 'Pack' Op, not range(0, 512).

Hi sylqiu,
I have recently been studying your repository. When I try to run your code, I get the error: TypeError: Expected list for 'values' argument to 'Pack' Op, not range(0, 512).
I set the main parameters as: --dataset_name nogt --is_training 0
which I believe selects test mode, so I changed the TestingConfig to:
self.wlast = './full_model_checkpoint/checkpoint/ckpt'
self.path = '/home/tof_rgbd_processing-master/fullmodel/gt_depth_rgb',
I am really confused about what is wrong in my code. Could you give me some suggestions?
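
For what it's worth, this particular error usually comes from a Python 2 vs. Python 3 difference: in Python 3, range() returns a lazy range object rather than a list, and the Pack op behind tf.stack only accepts a list or tuple of values. The sketch below reproduces the error and the usual fix in generic TF 1.x code; the actual offending line in this repository may of course look different:

  import tensorflow as tf  # TF 1.x graph mode assumed

  # Raises on Python 3:
  # TypeError: Expected list for 'values' argument to 'Pack' Op, not range(0, 512).
  # x = tf.stack(range(0, 512))

  # Works: materialize the range into a list first
  x = tf.stack(list(range(0, 512)))
  print(x.shape)  # (512,)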

Could you please share your real dataset (400 scenes)?

Hi Sylqiu
Sorry for bothering you.
Section 4.2 (Real Data Collection) of your paper (Deep End-to-End Alignment and Refinement for Time-of-Flight RGB-D Module) mentions 400 scenes of 640x480 real data captured with a Panasonic ToF and RGB sensor.
Could you please share this real dataset?
Thanks.

Rendering with pbrt

I have followed your instructions, but I was not able to get 256 transient images (instead I got a single *.exr file).

  • I downloaded ToF-pbrt-generation and installed pbrt-v3-tof
  • When I ran pbrt, I used the following command:

/DirToPbrt/pbrt /DirToToF/ToF-pbrt-generation/scenes/ToFFlyingThings3D/breakfast/more.pbrt

and I got a single "breakfast.exr" file in /DirToPbrt/.

I also tried to use "batch_run_pbrt.sh", but I was not able to run it because I have no clue what to set for some of its variables, such as
CAMFILE (I used ToF-pbrt-generation/scenes/ToFFlyingThings3D/breakfast/campath1000.txt (or campath1000_z.txt)), MAT_PATH (I have no idea),
and SCENE_LIST (also no idea).

Could you give more detailed instructions on how to get the 256 transient images?
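
One thing that might be worth checking first is whether that single breakfast.exr already stores the transient bins as separate channels or layers inside one file rather than as 256 separate images. Assuming the OpenEXR Python bindings are installed (this is just an inspection sketch, not code from the repository):

  import OpenEXR

  exr = OpenEXR.InputFile("/DirToPbrt/breakfast.exr")
  channels = exr.header()["channels"]           # dict: channel name -> pixel type
  print(len(channels), "channels found")
  print(sorted(channels.keys())[:12], "...")    # peek at the first few channel names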

Questions about training only the ToF-KPN

Hi sylqiu,
I have read your paper. It includes a comparison with DeepToF; however, the inputs to the ToF-KPN are the RGB image, the warped ToF amplitude, and the warped ToF depth image. When you train a ToF-KPN without the RGB image, how do you obtain the warped ToF amplitude and warped ToF depth images?

Where are the major files for Synthetic ToF Dataset Generation Using Blender and PBRT?

In your repo, you mention that the major files are organized as follows:

|- blender_utils
|-- export_path.py # export camera locations in Blender's Timeline, should be run inside of Blender #
|-- output_zpass.py # python script for writing ground truth depth #
|-- lighting_multiple_output # further applications for use of python in blender #

|- pbrt-v3-tof
|-- example
|-- batch_run_pbrt.sh # pbrt rendering example #

|- transient_processing
|-- example
|-- transient_to_depth.m # MatLab script for converting transient rendering into ToF correlation measurements and ToF depth images #

|- pbrt_material_augmentation
|-- exanple
|-- output_materail.m # MatLab script for writing material library .pbrt files #

|- scenes # 3D models and camera paths #

But I can't find these files anywhere in the repository.

Missing calib.bin

camera_util.py seems to expect a calib.bin file, but there doesn't seem to be one in the repo or in the dataset available for download. Can you provide a copy of this file?

Thanks!

Missing value for placeholder i_D

Thank you for uploading a sample test image. I'm trying to get it to run, but am running into another issue.

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1139, in _do_call
    return fn(*args)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1121, in _run_fn
    status, run_metadata)
  File "/home/ubuntu/anaconda3/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'i_D_1' with dtype float and shape [1,384,512,1]
  [[Node: i_D_1 = Placeholder[dtype=DT_FLOAT, shape=[1,384,512,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
  [[Node: Reshape_11/_333 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1171_Reshape_11", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

For reference, I'm running fullmodel.py, with dataset_name nogt and is_training 0.

I had to make a few small code changes to get it running, so it's possible I broke something in the process. Here are the changes I made:

In fullmodel.py, line 173, change "if self.use_fifo = True" to "if self.use_fifo == True" (add double equals)
In fullmodel.py, line 102, change "self.testing_config = TestingConfig()" to "self.testing_config = TestingConfig(dataset_name)", because TestingConfig's init method requires a dataset_name.
In fullmodel.py, line 490, change to "R, L, fD = self.sess.run([self.R, self.L, self.filtered_D])" because self.refineflow_EPE and self.roughflow_EPE are only created if ground truth is available.
In loaders.py, lines 26 and 27, remove dictionary entries for flatd and flat, because FLATD and FLAT seem to be undefined.
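
For anyone hitting the same placeholder error: in TF 1.x graph mode, every placeholder that the requested outputs depend on must be supplied through feed_dict, so the i_D_1 message above usually means the nogt/test path still builds or runs a node that depends on the ground-truth depth placeholder. A minimal sketch of the mechanism, using generic TF 1.x code rather than the repository's actual variable names:

  import numpy as np
  import tensorflow as tf  # TF 1.x assumed

  i_D = tf.placeholder(tf.float32, shape=[1, 384, 512, 1], name="i_D")
  out = tf.reduce_mean(i_D)  # any op that depends on the placeholder

  with tf.Session() as sess:
      # sess.run(out)  # fails: "You must feed a value for placeholder tensor 'i_D' ..."
      print(sess.run(out, feed_dict={i_D: np.zeros((1, 384, 512, 1), np.float32)}))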

Thanks again for your help with this!

Mismatch between my data and your ToFFlyingThings3D

Hi Di Qiu,
I am using the code that generates the transient images and the ground-truth data. However, when I reproduce it on my machine, some problems occur.
I use the command line 'pbrt -**.pbrt' to generate the transient images for a given camera position, and 'blender -b **.blend --python run_zpass_camera1_more.py' to generate the ground-truth data. When I compare my results with your ToFFlyingThings3D dataset, I have two questions:
1. Comparing my transient-image results with your GT data: I generate the 20 MHz depth using a single frequency (a quick sanity check of the standard phase-to-depth conversion is sketched after the screenshot below), but my 20 MHz depth is much larger than yours (in your .mat GT data, GT42/4095 gives real depth). The ratio between my 20 MHz depth and your GT data is about 1.4, i.e. my 20 MHz depth = 1.4 * your GT data, and I don't know why this happens.
2. Comparing my GT data with your GT data: the ratio between my GT data and your GT data is about 12.5, and when I subtract 12.5 * your GT data from my GT data, a shift remains, as you can see in this picture.
(screenshot attached in the original issue)
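
As a sanity check for point 1: under the standard continuous-wave ToF model (the textbook relation, not necessarily the exact convention used in this repository), a single modulation frequency $f$ and measured phase $\varphi$ give the depth

$$ d = \frac{c\,\varphi}{4\pi f}, \qquad d_{\max} = \frac{c}{2f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 20\ \mathrm{MHz}} = 7.5\ \mathrm{m}, $$

so at 20 MHz the unambiguous range is about 7.5 m. A roughly constant multiplicative gap like the 1.4 and 12.5 ratios above would more likely come from a unit or scale-factor mismatch (for example Blender scene units versus metres, or the GT quantization scale) than from the phase-to-depth conversion itself.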

best regards
