depth2hha-python's People

Contributors

charlescxk

depth2hha-python's Issues

algorithm details

It's difficult to understand the algorithm from the code alone. Could you please give some details about the implementation? Thank you very much!

Unit conversion

You write: "The depth image array passed to the function getHHA should be in meters. So in my demo code, I divide it by 10000 to convert the unit." If the target unit is meters, should the divisor be 10000 or 1000?
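
For reference, the right divisor depends only on the unit in which the raw depth values are stored; a minimal illustration (the example value and the storage units are assumptions, not taken from the repo):

    # Depth stored in millimetres -> divide by 1000 to get metres;
    # depth stored in tenths of a millimetre (1e-4 m) -> divide by 10000.
    raw = 2500                 # illustrative raw sensor value
    print(raw / 1000.0)        # 2.5 m if the file stores millimetres
    print(raw / 10000.0)       # 0.25 m if the file stores 0.1 mm units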

Depth and raw depth image

Hi,
A short question:
Could you explain what exactly you used for your depth and raw depth images? It seems that the depth image (say, 0.png) looks perfect, while the raw depth image (0_raw.png) is a registered depth image from a sensor (a Kinect, for example) with some depth information lost (black shadows).

Thank you for your help.

Convert LiDAR sparse depth?

Thank you for sharing the tool.

Could sparse LiDAR depth (where only 5-10% of the pixels on the image plane are valid) be converted to an HHA depth map?
And if it can be converted, would the HHA map be better for sparse-depth feature extraction?

Suggestions for a 10x faster implementation.

Hi. Thank you for sharing the code. After going through your code, I found two places where the implementation could be faster.

  1. In the function filterItChopOff, you could replace signal.convolve2d with cv2.filter2D, which is a faster implementation.
  2. In the function rotatePC, it is better to convert R, which originally has dtype('O'), to np.float64; this significantly speeds up np.dot.

After the above two modifications, the execution time of getHHA.py is ~2.5 s, compared to the original ~23 s (a sketch of both changes is given below).
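
A minimal sketch of the two substitutions, using illustrative arrays rather than the repo's actual variables:

    import cv2
    import numpy as np
    from scipy import signal

    Z = np.random.rand(480, 640)                 # illustrative depth-like array
    kernel = np.ones((5, 5), dtype=np.float64)   # illustrative smoothing kernel

    # 1. cv2.filter2D computes correlation, so flip the kernel to reproduce
    #    signal.convolve2d; BORDER_CONSTANT matches convolve2d's zero padding.
    slow = signal.convolve2d(Z, kernel, mode='same')
    fast = cv2.filter2D(Z, -1, cv2.flip(kernel, -1), borderType=cv2.BORDER_CONSTANT)
    assert np.allclose(slow, fast)

    # 2. Cast an object-dtype rotation matrix to float64 before np.dot;
    #    object-dtype matrix products fall back to slow per-element Python calls.
    R = np.eye(3, dtype=object)                  # stands in for the original dtype('O') matrix
    points = np.random.rand(3, 100000)
    rotated = np.dot(R.astype(np.float64), points)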

getHHA file angle question

In your getHHA.py file, you have the line

I[:, :, 0] = (angle + 128 - 90)

where did the numbers 128 - 90 (= 38) come from?
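
For reference, the net arithmetic effect of the offset is that an angle of 90 degrees is encoded as grey value 128 (why those particular constants were chosen is a question for the author); a minimal illustration with made-up angle values:

    import numpy as np

    angle = np.array([0.0, 90.0, 180.0])   # illustrative angles in degrees
    encoded = angle + 128 - 90             # same expression as in getHHA.py
    print(encoded)                         # [ 38. 128. 218.]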

ValueError: cannot convert float NaN to integer

Hello, I got an error when trying to run your code. Here's the log:

(TF1.8) masaki@masaki-CP65R:~/Downloads/Depth2HHA-python-master$ cd /home/masaki/Downloads/Depth2HHA-python-master ; env /home/masaki/anaconda2/envs/TF1.8/bin/python /home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/launcher 40071 -- /home/masaki/Downloads/Depth2HHA-python-master/getHHA.py
('max gray value: ', 3)
Traceback (most recent call last):
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/main.py", line 45, in
cli.main()
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/masaki/.vscode/extensions/ms-python.python-2020.5.78807/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 267, in run_file
runpy.run_path(options.target, run_name=compat.force_str("main"))
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 252, in run_path
return _run_module_code(code, init_globals, run_name, path_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/home/masaki/anaconda2/envs/TF1.8/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/masaki/Downloads/Depth2HHA-python-master/getHHA.py", line 71, in
hha = getHHA(camera_matrix, D, RD)
File "/home/masaki/Downloads/Depth2HHA-python-master/getHHA.py", line 30, in getHHA
pc, N, yDir, h, pcRot, NRot = processDepthImage(D * 100, missingMask, C)
File "/home/masaki/Downloads/Depth2HHA-python-master/utils/rgbd_util.py", line 35, in processDepthImage
1, C, np.ones(z.shape))
File "/home/masaki/Downloads/Depth2HHA-python-master/utils/rgbd_util.py", line 112, in computeNormalsSquareSupport
Z[ind] = np.nan
ValueError: cannot convert float NaN to integer

I edited the code in getPointCloudFromZ() to

z3 = Z.astype(np.float)

but the final image is not the same as your demo image. Can you tell me how to deal with it?
[attached: resulting HHA image]
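
For context, the ValueError above is NumPy refusing to write NaN into an integer-typed array; a minimal reproduction of the failure and of the cast described in the edit (illustrative array, not the repo's code):

    import numpy as np

    Z = np.array([[1, 2], [3, 4]])   # integer-typed depth array
    # Z[0, 0] = np.nan               # ValueError: cannot convert float NaN to integer
    Z = Z.astype(np.float64)         # cast to float first, as in the edit above
    Z[0, 0] = np.nan                 # now allowed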

ValueError: too many values to unpack (expected 2)

File "getHHA.py", line 67, in
hha = getHHA(camera_matrix, D, RD)
File "getHHA.py", line 26, in getHHA
pc, N, yDir, h, pcRot, NRot = processDepthImage(D * 100, missingMask, C);
File "/home/reshu/Desktop/Depth2HHA-python/utils/rgbd_util.py", line 18, in processDepthImage
X, Y, Z = getPointCloudFromZ(z, C, 1)
File "/home/reshu/Desktop/Depth2HHA-python/utils/rgbd_util.py", line 62, in getPointCloudFromZ
h, w= Z.shape

ValueError: too many values to unpack (expected 2)
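
For context, this unpacking only fails when the array has more than two dimensions, for example when the depth PNG is loaded with three channels; a minimal sketch (the imread flag shown is an assumption about a possible fix, not something stated in the issue):

    import cv2
    import numpy as np

    Z3 = np.zeros((480, 640, 3), dtype=np.uint16)   # a 3-channel read gives shape (h, w, 3)
    try:
        h, w = Z3.shape
    except ValueError as err:
        print(err)                                  # too many values to unpack (expected 2)

    # Loading the depth image as a single channel keeps the array 2-D:
    # D = cv2.imread('depth.png', cv2.IMREAD_ANYDEPTH)   # shape (h, w)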

Should missingMask == 0 instead of 1?

Hi, in the computeNormalsSquareSupport function, I was wondering whether we should compare missingMask to 0 instead of 1, because the values at the borders and at missing depths are 0.

A sample image of NYU0003_0000.png (rawDepth):
[attached: NYU0003_0000 raw depth image]

Does my own dataset need to be normalized?

Hello, thank you for your work. I'd like to ask: if I want to try my own dataset, something like KITTI where the distances lie in [0, 1000], is there anything besides the camera intrinsics that needs to be modified? Do I need to normalize the data? Thanks.

Speed is quite slow

Thank you for your repo. I found that running your code is very slow: for a single depth image from NYUv2, it takes about half an hour to get its HHA on my Mac. I wonder whether this is because of Python or whether I messed something up?

Different result for the same picture

I extracted depth images from the mat file of the official NYUDv2 dataset. Here is my extraction code:

f = h5py.File("nyu_depth_v2_labeled.mat")
depths = f["depths"]
depths = np.array(depths)
depths = depths / max * 255
depths = depths.transpose((0, 2, 1))

As the official documentation describes, the depth values are in meters. When I extracted the depth images, I didn't change the units; I just normalized them.

depths – HxWxN matrix of in-painted depth maps where H and W are the height and width, respectively and N is the number of images. The values of the depth elements are in meters.

Then I used your code to get HHA images. I removed the division by 10000 because, as I understand it, my depth images are already in meters.

D = cv2.imread(os.path.join(root, '000001.png'), cv2.COLOR_BGR2GRAY)

Unfortunately, I got a weird HHA image that was different from your result in the demo.
Can you tell me what my problem is?

[attached: my HHA image]
[attached: my depth image]

about package and camera matrix

Hi, I want to use HHA to handle my own depth pictures, but how can I get the camera matrix? I saw you import a package called "getCameraParam"; can I download it online?

Thank you so much!
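
For reference, the camera matrix here is the standard 3x3 pinhole intrinsics matrix; a minimal sketch with illustrative calibration values (fx, fy, cx, cy must come from your own sensor's calibration, and getCameraParam presumably just returns such a matrix for the NYU camera):

    import numpy as np

    # Illustrative values; replace with your own sensor's calibration.
    fx, fy = 518.9, 519.5    # focal lengths in pixels
    cx, cy = 325.6, 253.7    # principal point in pixels
    camera_matrix = np.array([[fx, 0.0, cx],
                              [0.0, fy, cy],
                              [0.0, 0.0, 1.0]])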

What if I don't have the Depth Image?

I'm trying to make HHA images for other datasets, such as SUN RGB-D, that don't provide improved depth images, only raw ones. Can you recommend any inpainting code for improving raw depth images?

I would also like to ask whether the result would be very different if I used a raw depth image in place of an improved depth image to make the HHA image.

Thanks.
