
pedropro / omg_depth_fusion

Probabilistic depth fusion based on Optimal Mixture of Gaussians for depth cameras

License: MIT License

Languages: C++ 97.55%, CMake 2.45%
Topics: depth-fusion kinect-fusion gaussian-mixture-models depth-estimation depth-camera depth-image

omg_depth_fusion's Introduction

Optimal Mixture of Gaussians for Depth Fusion

Implementation of the filter proposed in:
P.F. Proença and Y. Gao, Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty, Robotics and Autonomous Systems, 2018

The OMG filter denoises and hole-fills the depth maps produced by a depth sensor, and, more importantly, captures depth uncertainty through spatio-temporal observations, which proved useful as an input to the probabilistic visual odometry system proposed in the related paper.

The code runs the OMG filter on RGB-D dataset sequences (e.g. ICL-NUIM and TUM). Some examples are included in this repository. This is a stripped-down version: it does not include the VO system, so instead of using the VO frame-to-frame pose, it relies on the ground-truth poses to transform the old measurements (the registered point cloud) to the current frame. The pose-uncertainty condition was also removed. The sensor model employed is specific to the Kinect v1; to use other sensors (e.g. ToF cameras), this model should be changed.

Dependencies

  • OpenCV
  • Eigen3

Ubuntu Instructions

Tested on Ubuntu 14.04.

To compile, from inside the ./OMG directory, type:

mkdir build
cd build
cmake ../
make

To run the executable type:

./test_OMG 4 9 TUM_RGBD sitting_static_short

or

./test_OMG 4 9 ICL_NUIM living_room_traj0_frei_png_short

Windows Instructions

Tested configuration: Windows 8.1 with Visual Studio 10 & 12

This version already includes a VC11 project. Just make the necessary changes to link the project against OpenCV and Eigen.

Usage

General format: ./test_OMG <max_nr_frames> <consistency_threshold> <dataset_directory> <sequence_name>

  • max_nr_frames: the window size minus 1, i.e. the maximum number of previous frames used for fusion.
  • consistency_threshold: used to avoid fusing inconsistent measurements. If the squared distance between a new measurement and the current estimate exceeds the current uncertainty times this threshold, the new measurement is ignored. Setting this higher may produce better quality but captures the temporal uncertainty less well.

Three windows should pop up, showing the raw depth, the fused depth and the respective fused uncertainty.

Data

Two short RGB-D sequences are included as examples. To add more sequences, download them from TUM_RGBD or ICL-NUIM and place them inside the ./Data directory, the same way as the examples. Then add the respective camera parameters as calib_params.xml, in the same format as the examples.
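For reference, an OpenCV FileStorage XML file of that kind might look like the fragment below. The tag names and the depth scale shown here are assumptions for illustration (the authoritative format is the calib_params.xml shipped with the examples); the intrinsic values are the commonly quoted TUM/Kinect v1 defaults.

```xml
<?xml version="1.0"?>
<opencv_storage>
  <!-- Hypothetical field names; copy the layout of the shipped examples. -->
  <fx>525.0</fx>
  <fy>525.0</fy>
  <cx>319.5</cx>
  <cy>239.5</cy>
  <!-- TUM depth PNGs store depth in units of 1/5000 m. -->
  <depth_scale>5000</depth_scale>
</opencv_storage>
```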

omg_depth_fusion's People

Contributors

pedropro


omg_depth_fusion's Issues

about the output depth image

Hello @pedropro, your project is amazing, but I have a question about it. I want to save the output depth image, but I found that the output depth differs from the original one.
The original one: [image: 1341846314 357937]
The output one: [image: test]
Could you please help me get an output depth similar to the original one? I want to use the fused depth to run RGB-D SLAM and enhance its performance.
