charlescxk / Depth2HHA-python
Use python3 to convert depth image into hha image
License: MIT License
Depth2HHA-python/utils/rgbd_util.py
Line 49 in 6c7c14b
According to this line, if yMin is higher than -90, it is overwritten with -130.
Is there a reason for this?
With my synthetic dataset, the height channel happened to be capped at 255 for the upper half of the image.
File "getHHA.py", line 67, in
hha = getHHA(camera_matrix, D, RD)
File "getHHA.py", line 26, in getHHA
pc, N, yDir, h, pcRot, NRot = processDepthImage(D * 100, missingMask, C);
File "/home/reshu/Desktop/Depth2HHA-python/utils/rgbd_util.py", line 18, in processDepthImage
X, Y, Z = getPointCloudFromZ(z, C, 1)
File "/home/reshu/Desktop/Depth2HHA-python/utils/rgbd_util.py", line 62, in getPointCloudFromZ
h, w= Z.shape
ValueError: too many values to unpack (expected 2)
I extracted depth images from the .mat file of the official NYUv2 dataset. Here is my extraction code:
f = h5py.File("nyu_depth_v2_labeled.mat")
depths = f["depths"]
depths = np.array(depths)
depths = depths / max * 255
depths = depths.transpose((0, 2, 1))
As the official description says, the depth values are in meters. When I extracted the depth images, I didn't change the units; I only normalized.
depths – HxWxN matrix of in-painted depth maps where H and W are the height and width, respectively and N is the number of images. The values of the depth elements are in meters.
Then I used your code to get HHA images. I removed the division by 10000 because, as I understand it, my depth images are already in meters.
D = cv2.imread(os.path.join(root, '000001.png'), cv2.COLOR_BGR2GRAY)
Unfortunately, I got a weird HHA image which was different from your result in demo.
Can you tell me what my problem is?
my HHA image
my depth image
It's difficult to work out the algorithm from the code alone. Could you please give some details about the implementation? Thank you very much!
Hello, I got an error when trying to run your code. Here's the log:
(TF1.8) masaki@masaki-CP65R:~/Downloads/Depth2HHA-python-master$ cd /home/masaki/Downloads/Depth2HHA-python-master ; env /home/masaki/anaconda2/envs/TF1.8/bin/python /home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/launcher 40071 -- /home/masaki/Downloads/Depth2HHA-python-master/getHHA.py
('max gray value: ', 3)
Traceback (most recent call last):
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/main.py", line 45, in
cli.main()
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 267, in run_file
runpy.run_path(options.target, run_name=compat.force_str("main"))
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 252, in run_path
return _run_module_code(code, init_globals, run_name, path_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/masaki/Downloads/Depth2HHA-python-master/getHHA.py", line 71, in
hha = getHHA(camera_matrix, D, RD)
File "/home/masaki/Downloads/Depth2HHA-python-master/getHHA.py", line 30, in getHHA
pc, N, yDir, h, pcRot, NRot = processDepthImage(D * 100, missingMask, C)
File "/home/masaki/Downloads/Depth2HHA-python-master/utils/rgbd_util.py", line 35, in processDepthImage
1, C, np.ones(z.shape))
File "/home/masaki/Downloads/Depth2HHA-python-master/utils/rgbd_util.py", line 112, in computeNormalsSquareSupport
Z[ind] = np.nan
ValueError: cannot convert float NaN to integer
I edited the code in getPointCloudFromZ():
z3 = Z.astype(np.float)
But the final image is not the same as your demo image. Can you tell me how to deal with it?
Thank you for your repo. I found that running your code is very slow: a single NYUv2 depth image takes me half an hour to convert to HHA on my Mac. I wonder whether this is inherent to Python or whether I messed something up?
Thank you for sharing the tool.
Could LiDAR sparse depth (only 5-10% of points on the image plane are valid) be converted to an HHA map?
And if it can, would the HHA map be better for sparse-depth feature extraction?
I'm trying to make HHA images for datasets that, unlike SUN RGB-D, don't provide improved (in-painted) depth images, only raw ones. Can you recommend inpainting code for improving a raw depth image?
I would also like to ask whether the result would be very different if I used a raw depth image in place of an improved one to make the HHA image.
Thanks.
“The depth image array passed to function getHHA should be in 'meter'. So in my demo code, I divide it by 10000 to modify the unit.” If the unit is meters, should the divisor be 10000 or 1000?
In your getHHA.py file, you have the line
I[:, :, 0] = (angle + 128 - 90)
where do the numbers 128 - 90 (= 38) come from?
Hi, I want to use HHA on my own depth images, but I want to ask how I can get the camera matrix. I saw you import a package called "getCameraParam"; can I download it online?
Thank you so much!
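As far as I can tell, getCameraParam is not a pip package but a small helper inside this repo's utils/ directory that returns the NYUv2 intrinsics. For your own camera you can build the 3x3 intrinsic matrix directly. A sketch (the fx/fy/cx/cy values below are the commonly quoted NYUv2 ones, used only as placeholders; substitute your own calibration):

```python
import numpy as np

fx, fy = 518.8579, 519.4696   # focal lengths in pixels (placeholder values)
cx, cy = 325.5824, 253.7362   # principal point in pixels (placeholder values)

camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])
print(camera_matrix.shape)    # (3, 3)
```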
Hi. Thank you for sharing the code. After going through your code, I found two places where the implementation could be faster.
1. In filterItChopOff, you could replace signal.convolve2d with cv2.filter2D, which is a faster implementation.
2. In rotatePC, it is better to convert R, which originally has dtype('O'), to np.float64; this boosts the performance of np.dot.
After the above two modifications, the execution time of getHHA.py is ~2.5 s, compared to the original ~23 s.
Hi,
A short question:
Could you explain what exactly you used for your depth and raw depth images? It seems the depth image (say, 0.png) looks perfect, while the raw depth image (0_raw.png) is a registered depth image from a sensor (a Kinect, for example) with some depth information lost (black shadows).
Thank you for your time.
Hello, and thank you for your work. I'd like to ask: if I want to try my own dataset, something like KITTI where distances are in [0, 1000], is there anything besides the camera intrinsics that needs to be modified? Do I need to normalize? Thanks.
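On the unit question (a generic sketch, not specific to KITTI's actual storage format): getHHA expects depth in meters, so the only "normalization" needed is a unit conversion; scaling to [0, 255] would distort the geometry.

```python
import numpy as np

raw = np.random.rand(4, 4) * 1000.0   # stand-in values in [0, 1000]

# Assumption for illustration only: the raw values are centimeters. Pick the
# divisor that matches your dataset's actual unit, the same way the demo
# divides the NYUv2 PNGs by 10000.
D_meters = raw / 100.0
print(D_meters.max() <= 10.0)  # True
```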