sylqiu / tof_rgbd_processing
Off-the-shelf deep alignment and refinement for weakly calibrated ToF RGB-D modules
When I compile warp_by_flow in tf1.15 with CUDA 10.0, it gives me the error:
#include "tensorflow/core/util/cuda_kernel_helper.h" does not exist
It seems that tf1.15 removed it. Did you test it with any other TF version?
Hi, sylqiu,
Recently, I have been studying your repository. When I try to run your code, I receive the error: TypeError: Expected list for 'values' argument to 'Pack' Op, not range(0, 512).
I set the main input parameters as: --dataset_name nogt --is_training 0
I believe this means I am using test mode, so I changed the TestingConfig:
self.wlast = './full_model_checkpoint/checkpoint/ckpt'
self.path = '/home/tof_rgbd_processing-master/fullmodel/gt_depth_rgb',
I am really confused about what is wrong in my code. Could you give me some suggestions?
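For what it's worth, a common cause of this exact error under Python 3 (a guess from the message, not confirmed against this repo): TF 1.x's 'Pack' op (what tf.stack lowers to) requires an actual list of values, while Python 3's range() returns a lazy range object. Code written for Python 2, such as tf.stack(range(512)), fails with this TypeError; wrapping the range in list() is the usual fix:

```python
# In Python 2, range(512) returned a list, which tf.stack (the 'Pack' op)
# accepts; in Python 3 it returns a lazy range object, and TF 1.x raises:
#   TypeError: Expected list for 'values' argument to 'Pack' Op, not range(0, 512)
values = range(512)
assert not isinstance(values, list)   # this is what triggers the 'Pack' error

# Hypothetical fix: materialize the list, e.g. tf.stack(list(range(512)))
fixed = list(values)
assert isinstance(fixed, list) and len(fixed) == 512
```

If this is the cause, searching the repo for `tf.stack(range` or `tf.pack(range` should locate the offending line.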
Hi Sylqiu
Sorry for bothering you.
Section 4.2 (Real Data Collection) of your paper (Deep End-to-End Alignment and Refinement for Time-of-Flight RGB-D Module) mentions 400 scenes of 640x480 real data captured with a Panasonic ToF and RGB sensor.
Could you please share your real dataset?
Thanks.
I have followed your instructions, but I was not able to get the 256 transient images (instead I got a single *.exr file):
/DirToPbrt/pbrt /DirToToF/ToF-pbrt-generation/scenes/ToFFlyingThings3D/breakfast/more.pbrt
and I got a single "breakfast.exr" file in /DirToPbrt/
I also tried to use "batch_run_pbrt.sh", but I was not able to run it because I have no clue what to set for some of the variables, such as:
CAMFILE (I used ToF-pbrt-generation/scenes/ToFFlyingThings3D/breakfast/campath1000.txt (or campath1000_z.txt)), MAT_PATH (I have no idea),
SCENE_LIST (also no idea), etc.
Could I get more detailed instructions on how to get the 256 transient images?
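For reference, here is a minimal Python sketch of how I imagine the batch loop works, based only on the single-scene command quoted above; the meaning of SCENE_LIST (and the per-scene file name) is my assumption, not taken from the actual script:

```python
import os

# Hypothetical reconstruction of the loop in batch_run_pbrt.sh.
# Assumptions (NOT confirmed by the repo):
#   scene_list  - scene folder names under scenes/ToFFlyingThings3D
#   each scene folder contains a more.pbrt entry file
def build_pbrt_commands(pbrt_bin, scenes_root, scene_list):
    """Build one pbrt invocation per scene; nothing is executed here."""
    cmds = []
    for scene in scene_list:
        scene_file = os.path.join(scenes_root, scene, "more.pbrt")
        cmds.append([pbrt_bin, scene_file])
    return cmds

cmds = build_pbrt_commands(
    "/DirToPbrt/pbrt",
    "/DirToToF/ToF-pbrt-generation/scenes/ToFFlyingThings3D",
    ["breakfast"],
)
# cmds[0] reproduces the single-scene command quoted above
```

Each command list could then be run with subprocess.run(cmd); how CAMFILE and MAT_PATH feed into the .pbrt files is exactly the part I cannot reconstruct.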
Hi, sylqiu:
I have read your paper. In it, a comparison with DeepToF is given; however, the input of the ToF-KPN is the RGB image, the warped ToF amplitude, and the warped ToF depth image. When you train a ToF-KPN without the RGB image, how do you get the warped ToF amplitude and warped ToF depth image?
In your repo, you mention that the major files are organized as follows:
|- blender_utils
|-- export_path.py # export camera locations in Blender's Timeline, should be run inside of Blender #
|-- output_zpass.py # python script for writing ground truth depth #
|-- lighting_multiple_output # further applications for use of python in blender #
|- pbrt-v3-tof
|-- example
|-- batch_run_pbrt.sh # pbrt rendering example #
|- transient_processing
|-- example
|-- transient_to_depth.m # MatLab script for converting transient rendering into ToF correlation measurements and ToF depth images #
|- pbrt_material_augmentation
|-- exanple
|-- output_materail.m # MatLab script for writing material library .pbrt files #
|- scenes # 3D models and camera paths #
However, I can't find where they are.
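As an aside, the conversion that transient_to_depth.m is described as performing (transient rendering → ToF correlation measurements → ToF depth) can be sketched with the standard four-phase ToF formula. This is the textbook four-bucket method, not the repo's actual MATLAB code:

```python
import math

C = 3e8  # speed of light, m/s

def tof_depth(a0, a90, a180, a270, freq_hz):
    """Standard four-bucket ToF: recover phase delay from the four
    correlation samples, then convert phase to depth."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * freq_hz)

# Simulate ideal correlations for a 3 m target at 20 MHz modulation.
freq = 20e6
true_depth = 3.0
phi = 4.0 * math.pi * freq * true_depth / C        # round-trip phase delay
a = [math.cos(phi - k * math.pi / 2) for k in range(4)]  # 0/90/180/270 deg
print(tof_depth(a[0], a[1], a[2], a[3], freq))     # ~3.0
```

In a renderer, each correlation sample would come from integrating the transient response against a phase-shifted reference signal instead of the ideal cosine used here.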
camera_util.py seems to expect a calib.bin file, but there doesn't seem to be one in the repo or in the dataset available for download. Can you provide a copy of this file?
Thanks!
Thank you for uploading a sample test image. I'm trying to get it to run, but am running into another issue.
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1139, in _do_call
    return fn(*args)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1121, in _run_fn
    status, run_metadata)
  File "/home/ubuntu/anaconda3/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'i_D_1' with dtype float and shape [1,384,512,1]
  [[Node: i_D_1 = Placeholder[dtype=DT_FLOAT, shape=[1,384,512,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
  [[Node: Reshape_11/_333 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1171_Reshape_11", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
For reference, I'm running fullmodel.py, with dataset_name nogt and is_training 0.
I had to make a few small code changes to get it running, so it's possible I broke something in the process. Here are the changes I made:
In fullmodel.py, line 173, change "if self.use_fifo = True" to "if self.use_fifo == True" (add double equals)
In fullmodel.py, line 102, change "self.testing_config = TestingConfig()" to "self.testing_config = TestingConfig(dataset_name)", because TestingConfig's init method requires a dataset_name.
In fullmodel.py, line 490, change to "R, L, fD = self.sess.run([self.R, self.L, self.filtered_D])" because self.refineflow_EPE and self.roughflow_EPE are only created if ground truth is available.
In loaders.py, lines 26 and 27, remove dictionary entries for flatd and flat, because FLATD and FLAT seem to be undefined.
Thanks again for your help with this!
Hi, Di Qiu:
I am using the code that generates the transient images and ground truth data; however, when I repeat it on my machine, some problems occur.
I am using the command line 'pbrt -**.pbrt' to generate the transient images for a particular camera position, and the command line 'blender -b **.blend --python run_zpass_camera1_more.py' to generate the ground truth data. However, when I compare against your ToFFlyingThings3D dataset, some questions arise:
1. Comparing my transient-image results with your GT data: I generate the 20 MHz depth using a single frequency; however, my 20 MHz depth is much larger than your data (in your .mat GT data, GT42/4095 converts to real depth). The ratio between my data and yours is 1.4, i.e. my 20 MHz depth = 1.4 * your GT data. I don't know why this happens.
2. Comparing my GT data with your GT data: first, I found the ratio between my GT data and your GT data is about 12.5; second, when I subtract 12.5 * your GT data from my GT data, a shift remains. You can see this in the picture.
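One generic way to make the "12.5x ratio plus a shift" observation precise (a diagnostic sketch assuming numpy; the function name and synthetic data are mine, not from the repo) is to fit d_mine ≈ a * d_yours + b by least squares and inspect a, b, and the residual:

```python
import numpy as np

def fit_scale_shift(d_ref, d_test):
    """Least-squares fit d_test ≈ a * d_ref + b; returns (a, b)."""
    A = np.stack([d_ref.ravel(), np.ones(d_ref.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, d_test.ravel(), rcond=None)
    return a, b

# Synthetic check: a depth map scaled by 12.5 plus a constant shift of 0.3
d_ref = np.linspace(0.5, 5.0, 100)
d_test = 12.5 * d_ref + 0.3
a, b = fit_scale_shift(d_ref, d_test)
print(round(float(a), 3), round(float(b), 3))  # 12.5 0.3
```

If a and b explain the difference almost exactly, the mismatch is likely a unit/normalization convention (e.g. Blender z-pass units vs. the dataset's quantization) rather than a rendering error.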
best regards
Thanks for your work, but I can only find the invflow data in gt_depth_rgb_test_small_pt. How can I get the invflow data for the folder gt_depth_rgb? @sylqiu