uzh-rpg / rpg_open_remode

485 stars · 61 watchers · 187 forks · 4.44 MB

This repository contains an implementation of REMODE (REgularized MOnocular Depth Estimation), as described in the paper.

Home Page: http://rpg.ifi.uzh.ch/docs/ICRA14_Pizzoli.pdf

License: GNU General Public License v3.0

CMake 27.28% C++ 43.14% Cuda 29.59%

rpg_open_remode's Introduction

REMODE

This repository contains an implementation of REMODE (REgularized MOnocular Depth Estimation), as described in the paper

http://rpg.ifi.uzh.ch/docs/ICRA14_Pizzoli.pdf

The following video demonstrates the proposed approach:

http://youtu.be/QTKd5UWCG0Q

Disclaimer

The REMODE implementation in this repository is research code; any fitness for a particular purpose is disclaimed.

The code has been tested on Ubuntu 12.04, 14.04, and 15.04, with ROS Groovy, ROS Indigo, and ROS Jade.

Licence

The source code is released under a GPLv3 licence.

http://www.gnu.org/licenses/

Citing

If you use REMODE in an academic context, please cite the following publication:

@inproceedings{Pizzoli2014ICRA,
  author = {Pizzoli, Matia and Forster, Christian and Scaramuzza, Davide},
  title = {{REMODE}: Probabilistic, Monocular Dense Reconstruction in Real Time},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2014}
}

Install and run REMODE

The wiki

https://github.com/uzh-rpg/rpg_open_remode/wiki

contains instructions on how to build and run REMODE.

NOTE: this implementation requires a CUDA-capable GPU and the NVIDIA CUDA Toolkit:

https://developer.nvidia.com/cuda-zone

Acknowledgments

The author acknowledges the key contributions by Christian Forster, Manuel Werlberger and Jeff Delmerico.

Also, thanks to Michael Gassner, Zichao Zhang and Henri Rebecq for their valuable comments and help.

Contributing

You are very welcome to contribute to REMODE by opening a pull request via GitHub. I try to follow the ROS C++ style guide: http://wiki.ros.org/CppStyleGuide

rpg_open_remode's People

Contributors

mwerlberger, pizzoli


rpg_open_remode's Issues

Integration with ORB SLAM 2 - Low convergence

Hey there,

interesting project. I am currently trying to combine remode and ORB-SLAM 2. Everything is communicating as expected; remode gets the images with pose and depth. However, the result does not look like it should (see below).
Now it is hard for me to identify what might cause this. Calibration is obviously okay, because ORB_SLAM has no problem with it and triangulates like a pro. I have quite a low frame rate (currently 1 Hz); does remode need higher frame rates? Any tips?

Thanks and best regards,

Alex

(screenshot attached)

getting eigen runtime error

I am getting an Eigen runtime error. My Eigen version is:
#define EIGEN_WORLD_VERSION 3
#define EIGEN_MAJOR_VERSION 3
#define EIGEN_MINOR_VERSION 90

any idea what to do?

[ INFO] [1506664102.694889432]: SVO initialized
vo: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:128: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 32>::plain_array() [with T = double; int Size = 4; int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(eigen_unaligned_array_assert_workaround_gcc47(array)) & (31)) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed.

remode/depth - no new messages, not able to visualize for live stream.

I calibrated the camera according to the documentation and included

in the SVO launch file, but I am not able to visualize anything. I think the problem is that the remode/depth topic shows no new messages. I changed the camera_calibration file by calibrating the camera using the pinhole model. Running **rosrun image_view image_view image:=/remode/depth** shows a window, but it greys out, showing nothing. I am attaching some screenshots in case someone could help me out.

(four screenshots attached, 2018-10-12)

CUDA driver version is insufficient for CUDA runtime version

When running the tests, the following error is encountered:
./all_tests
Running executable: ./all_tests
Checking available CUDA-capable devices...
ERROR: cudaGetDeviceCount CUDA driver version is insufficient for CUDA runtime version

I am currently using NVIDIA-SMI 340.93, Driver Version 340.93, and CUDA 7.0 on Linux Mint 17.2 Cinnamon 64-bit. So which CUDA driver version is expected? Thanks.

Extend REMODE to support fisheye camera

Hi @pizzoli , thanks for your great project.

I want to extend REMODE to support fisheye / omnidirectional cameras. Is this possible?
I'm thinking of using the cv::omnidir module provided by OpenCV for the camera calibration (C. Mei's model).
For the epipolar matching and triangulation, what kind of methods do you suggest?

Camera calibration parameters

There was a previous question asking for the definition of the parameters r1, r1.

It seems to have been marked as a duplicate without providing a reference to said duplicate?
#17

Is the calculation of ncc wrong in the code?

In the epipolar_match.cu file, why is NCC calculated as
const float ncc = ncc_numerator * rsqrtf(ncc_denominator + FLT_MIN);
instead of
const float ncc = ncc_numerator /rsqrtf(ncc_denominator + FLT_MIN); ?

Error running the SVO - REMODE pipeline on a ROS bag file

Hi!

I am trying to run the example in "Run using SVO", but I get the two following error messages with roslaunch px4_2.launch in the svo_ros package:

  • the first one appears once the .bag file is playing:
    (screenshot: error1)

  • the second one: when I try once again after having had the first error, just by launching px4_2.launch in the svo_ros package, I immediately get the following error:
    (screenshot: erroreigenrepro)

Do you have any solutions/explanations to both errors?

Implementation in video

Hi, I was wondering if this implementation is the one used in the video in the readme. Thanks!

Holiday

Eat pizza and enjoy the beaches of Rome! ;-)

compile without CUDA

I am looking for any possibility of non-real-time 3D reconstruction with this repo, without CUDA support.

Noisy cloud and low convergence rate

Hi, I am setting up a UAV with a ZED camera and a Jetson TX1. I am testing REMODE with a custom ZED wrapper that publishes SVO messages to fit the ROS interface. So far everything seems to be OK, but the cloud I am getting is very noisy, and the convergence rate never grows beyond ~21.6%.
I uploaded a GIF of my results. I decreased the convergence threshold to 20%. Does anyone have a clue what my error could be?

(GIF attached)

Thanks in advance

Publish pcl as latched topic

You might want to set the combined point cloud to publish as a latched topic so that the most recently published one is always available in rviz (as suggested by Jeff).

CudaException: Unable to bind texture

I have successfully built the program, but when I run "dataset_main" it terminates with the following error:
terminate called after throwing an instance of 'rmd::CudaException'
what(): CudaException: Unable to bind texture:
cudaError code: invalid texture reference (18)

In addition, "all_tests" only passes the first test; it fails the others for a similar reason as above. My system is Ubuntu 14.04 with CUDA 7.0 (GTX 980).

Thanks for any advice!

Can remode get correct 3d points from SfM?

Hi,
Currently, I use poses from an SfM pipeline and plan to use remode to get a dense 3D point map. Can remode get correct 3D points from SfM? I read your code and couldn't find the place where inverse depth is implemented. Does an inverse depth implementation greatly improve precision?

Can not get remode_test_data.zip

$ wget http://rpg.ifi.uzh.ch/datasets/remode_test_data.zip
--2017-07-19 14:24:16--  http://rpg.ifi.uzh.ch/datasets/remode_test_data.zip
Resolving rpg.ifi.uzh.ch... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address 'rpg.ifi.uzh.ch'

About the result?

I have succeeded with my own input data. The result is quite nice, I think. Thank you very much for your great work on this project.
I am new to ROS and I don't know what is in the result file.
(1) I saved all the results with "rosbag record -a", which saves all the information while running.
(2) I can export the point cloud to many .pcd files with "rosrun pcl_ros bag_to_pcd myfile.bag /remode/poincloud ./".

(3) This is my question:
What I am interested in is the information contained in the bag file, mainly the depth maps and the images which correspond to them. Are they contained in the bag file? I checked the rosbag with "rosbag info mybag" and I see there are 17 /remode/depth messages, but I didn't see 17 corresponding images.

And by the way, I saw the video on YouTube. In the final step there is a denoising process; it says "denoising by depth map uncertainty". Can you give a reference to this method?

Depth estimation

Hi @pizzoli,

Your work is very impressive!

I would like to use another pose estimation method to pass the camera pose and location to remode. However, I noticed that, apart from the pose and location, the message from SVO also passes the depth map to remode, and remode creates a reference map based on the max and min depth. Can you please let me know how to set these values if I cannot estimate an initial depth from my pose estimation method?

Looking forward to your kind reply.

Using REMODE with Colour

Hi,
All our implementations have been monochrome. Is there a way to make it colour, like in the YouTube video?
Thanks.

REMODE build error

I followed the instructions and tried to build REMODE without ROS. Everything was fine, including running "cmake -DGTEST_ROOT=$MY_WORKSPACE/googletest/install -DBUILD_ROS_NODE=OFF ..". But the last step, "make", failed because of errors.

The following warning appeared many times.

/Home/Remode/rpg_open_remode/include/rmd/device_image.cuh:131:73: warning: throw will always call terminate() [-Wterminate]
throw CudaException("Image: unable to free allocated memory.", err);

Could you help me to solve this problem? Thank you!

which way to use Kinect camera?

Hi all,

Thanks for your code.
I managed to run it with ROS Kinetic, but now I have the problem of not getting a good map while using a live stream from a Kinect.
The 3D reconstruction is not good, and generally it gives just a small part of the environment.

So, do I use the same calibration file for the Kinect camera?
And how can I get a good reconstruction of an environment?

Thx,
