
Comments (5)

tangketan avatar tangketan commented on September 26, 2024

For StereoBM, maxDisparity = 64; the other parameters are the same as OpenCV's defaults. No speckle filter is used.

Yes, the output grayscale images are already calibrated. That's why we don't provide distortion coefficients.

from guidance-sdk.

uwleahcim avatar uwleahcim commented on September 26, 2024

The default parameters for OpenCV's StereoBM are numDisparities=0 and blockSize=21 via StereoBM::create.

Is maxDisparity = numDisparities - minDisparity correct?

By default the blockSize is much larger than what the Guidance system outputs. I had to tune the blockSize down to 9 to get it to resemble the Guidance depth output.

Ptr<StereoBM> sbm = StereoBM::create(64, 9);
sbm->compute(left, right, depth);

Also, the output of StereoBM is a disparity map, so its colors should be the inverse of the depth map's.

To generate the depth map from the disparity map, I need to perform the following calculation:

depth = baseline * focal_length / disparity

depth should be an int16, so what units of baseline and focal_length should be used in the calculation? And what are the units of the depth image (mm?)

If the units are in mm, will my equation look like this (247.35 mm focal length and 15 cm baseline)?

depth = 150 * 247 / disparity

Anyway, here is the depth image I generated with the following code:

Ptr<StereoBM> sbm = StereoBM::create(64, 9);
sbm->compute(g_greyscale_image_left, g_greyscale_image_right, g_depth);
g_depth = (247.35 * 150)/g_depth;
g_depth.convertTo(depth8, CV_8UC1);
imshow(frame_id[CAMERA_PAIR_INDEX] + " depth opencv", depth8);

OpenCV depth: [screenshot: front_guidance depth opencv_screenshot_24 06 2016]

Guidance depth: [screenshot: front_guidance depth_screenshot_24 06 2016]

Guidance image: [screenshot: front_guidance left_screenshot_24 06 2016]

Seems like the scaling is just off a bit.


uwleahcim avatar uwleahcim commented on September 26, 2024

When comparing just the disparity maps, the images look almost exactly the same. Now the issue is how the Guidance core computes the depth map from the disparity map.

Also, when I pull the disparity (generated either by OpenCV or by the Guidance core) and the OpenCV depth (calculated as depth = 150 * 247 / disparity) from the published topic, the image comes out blown-out white. This isn't the case with the depth generated by the Guidance core.

The following images were published by guidanceNode and then pulled from the topic via guidanceNodeTest.

These depth images were generated by the Guidance core:
[screenshots: back depth_screenshot_27 06 2016, front depth_screenshot_27 06 2016]

These depth images were generated by OpenCV:
[screenshots: left depth_screenshot_27 06 2016, right depth_screenshot_27 06 2016]


uwleahcim avatar uwleahcim commented on September 26, 2024

Just an update.

I'm an idiot and completely forgot about the fractional bits. Both the disparity map generated by the Guidance core and the one from OpenCV's StereoBM are 16-bit with 4 fractional bits. Thus, one has to divide the raw disparity data by 16 to convert it to floating point.

Also, the units of the depth map are meters, not mm.

Now, according to the Guidance documentation, the depth image also has fractional bits, but 7 of them. Thus, to match the Guidance core's depth image output, you have to multiply the floating-point depth image by 128.

For anyone using ROS, the standard for depth images is floating-point meters. Thus you'll have to divide the Guidance core's depth by 128 first to convert it to floating point.


tangketan avatar tangketan commented on September 26, 2024

Thanks @uwleahcim for the explanations. Yes, Guidance uses 7 fractional bits. We have given an example of displaying the real depth value at the center of the image in usb_example/main.cpp.


