marianojt88 / joint-vo-sf
Fast Odometry and Scene Flow from RGB-D Cameras based on Geometric Clustering
The GitHub repository does not contain the 'joint_vo_sf.cpp' file, which contains essentially all the code needed to run this visual odometry method.
There seem to be a few dependencies required by the build that are not described in the README, namely Intel TBB (apt install libtbb-dev) and Eigen (personally I'm using a local install autodetected by CMake).
Apart from that, everything seems to work smoothly. In fact, for the "Freiburg3/walk xyz" sequence, using all the default parameters from this repo, I get a mean processing time of ~29 ms over the entire sequence (std ~5.7 ms), which is considerably faster than the 80 ms reported in the paper, provided I'm not doing anything wrong. (28.8 ms @ 3.4 ms std for walk static.)
I'm on Ubuntu 16.04, and my CPU is an Intel Core i7-6700K, so a good CPU, but nothing too ridiculous.
Edit: I just double-checked the paper; the experiments were run on a laptop Core i7-4712HQ, so the speedup is not too surprising, but still nice to see!
Keep up the great work!
After reading your paper, I still don't understand the whole algorithm. I only know that it is roughly divided into three parts: the first clusters the scene with K-means; the second uses an M-estimator to compute the photometric and geometric residuals and then segments dynamic objects; and the third feeds a parameter B back into the next segmentation. Could you briefly explain which algorithms are used? Thank you!
I've read your paper "Smooth Piece-Wise Rigid Scene Flow from RGB-D Images". MC-Flow performs better than this method, but it is slower, and I think the difference lies in the optimization: "Smooth Piece-Wise Rigid Scene Flow from RGB-D Images" uses a primal-dual algorithm, while this paper is based on optimizing the four energy functions. Is my understanding wrong? Can you tell me some details? Thank you!
I want to debug this project and get some implementation details, so I modified the CMakeLists.txt:
-SET(CMAKE_BUILD_TYPE "Release")
-SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3 -mtune=native -mavx")
+SET(CMAKE_BUILD_TYPE "Debug")
+SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -mtune=native -mavx")
But with this configuration, it always throws an assertion on unaligned arrays:
VO-SF-ImagePair: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:128: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 32>::plain_array() [with T = float; int Size = 16; int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(eigen_unaligned_array_assert_workaround_gcc47(array)) & (31)) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed.
Aborted (core dumped)
I followed the link above, but I still can't find where the problem is.
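For what it's worth, this assertion is compiled out in Release builds (NDEBUG), which is why it only surfaces in Debug: the underlying alignment problem (typically a heap-allocated object holding a fixed-size, vectorizable Eigen member without EIGEN_MAKE_ALIGNED_OPERATOR_NEW) is present either way. A quick workaround for debugging sessions, assuming you don't need vectorized Eigen code while stepping through, is to disable Eigen's static alignment in the Debug flags:

```cmake
# Sketch of a Debug configuration (assumption: AVX vectorization is not
# needed while debugging). EIGEN_MAX_STATIC_ALIGN_BYTES=0 is Eigen's
# documented switch to disable static alignment, so the unaligned-array
# assertion cannot fire.
SET(CMAKE_BUILD_TYPE "Debug")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -g -DEIGEN_MAX_STATIC_ALIGN_BYTES=0")
```

The proper long-term fix, per the Eigen page the assertion links to, is adding EIGEN_MAKE_ALIGNED_OPERATOR_NEW to the classes that hold fixed-size Eigen members, but the define above is usually enough to get a usable debug build.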
Thanks first of all for your excellent work.
I want to display my own scene flow result with your visualization tools. However, it looks different from the real scene flow result.
The ranges of my scene flow values and depth values are both normal, but the length of the scene flow vectors is abnormal.
After double-checking, I found that it performs well when I scale my depth values.
I guess the line
depth.convertTo(depth_float, CV_32FC1, 1.0 / 500.0)
is important for me. My depth map is obtained by SGBM and is in millimeters.
Looking forward to your early reply, and thank you again for your excellent work.
Hello, I downloaded the dataset "rawlog_rgbd_dataset_freiburg3_walking_xyz" and put it in the "data" folder.
Then, in main_vo_sf_datasets.cpp, I replaced
dataset.filename = ".../rawlog_rgbd_dataset_freiburg1_desk/rgbd_dataset_freiburg1_desk.rawlog";
with
dataset.filename = "./data/rawlog_rgbd_dataset_freiburg3_walking_xyz/rgbd_dataset_freiburg3_walking_xyz.rawlog";
After building, I ran
build/VO-SF-Datasets
but the MRPT window is empty!
(It works when I run build/VO-SF-ImagePair.)
Hi,
I ran into a problem building TBB on Windows 10 with VS2019.
Are there any instructions on how to build oneTBB?
I keep getting errors like "missing TBB_LIBRARY_DIR, TBBConfig.cmake, TBB_root_dir".
Thanks for the help!
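An assumption on my side (I have not checked this repo's CMake scripts): those variable names suggest find_package(TBB) is running in config mode and cannot locate TBBConfig.cmake. Recent oneTBB releases install that file under <install>/lib/cmake/TBB, so pointing TBB_DIR there when configuring usually resolves it. A sketch with hypothetical paths:

```bat
:: Hypothetical paths -- adjust to where your oneTBB build/install lives.
:: TBBConfig.cmake is what find_package(TBB CONFIG) looks for via TBB_DIR.
cmake -G "Visual Studio 16 2019" -A x64 ^
      -DTBB_DIR="C:/oneTBB/install/lib/cmake/TBB" ^
      ..
```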
System: Ubuntu 16.04
Which version of MRPT is required?
I installed MRPT 1.9, but compilation fails with a lot of undefined-symbol errors.
Also, I assume the required OpenCV version is 3?
Since MRPT 1.9 doesn't work, I tried installing MRPT 1.5.
However, the MRPT installed from the Ubuntu repository seems to depend on OpenCV 2, so the linker warns that libopencv_core.so.2.4, needed by libmrpt-base.so, may conflict with libopencv_core.so.3.2.
It compiles successfully, but the binary doesn't work: cv::imread() always returns an empty matrix.
Did you compile MRPT from source, and which version did you use?
I ran VO-SF-ImagePair and successfully got the ClusterFlow.xml file. However, I am having trouble interpreting the values: they are all ~10^-2 or smaller, while the raw values of the example depth images are ~10^4.
Shouldn't the x and y flow values be in pixels, and the z flow values be in some distance unit (e.g. mm)?
What do you think are the disadvantages of this method? And apart from the number of clusters, where could it be improved?
Is there any function in this code that can conveniently compute the photometric and geometric residuals reported in the paper?
Best Regards
Lei Han
In the file datasets.cpp:
void Datasets::writeTrajectoryFile(poses::CPose3D &cam_pose, Eigen::MatrixXf &ddt)
{
//Don't take into account those iterations with consecutive equal depth images
if (abs(ddt.sumAll()) > 0)
{
mrpt::math::CQuaternionDouble quat;
poses::CPose3D auxpose, transf;
transf.setFromValues(0,0,0,0.5*M_PI, -0.5*M_PI, 0);
auxpose = cam_pose - transf;
auxpose.getAsQuaternion(quat);
char aux[24];
sprintf(aux,"%.04f", timestamp_obs);
f_res << aux << " " << cam_pose[0] << " " << cam_pose[1] << " " << cam_pose[2] << " ";
f_res << quat(2) << " " << quat(3) << " " << -quat(1) << " " << -quat(0) << endl;
}
}
We know the TUM format is timestamp tx ty tz qx qy qz qw, so why are the quaternion components stored in the order quat(2), quat(3), -quat(1), -quat(0)?