
!!!!This is an early version of MultiCol-SLAM!!!!

MultiCol-SLAM

Author: Steffen Urban (urbste at googlemail.com).

MultiCol-SLAM is a multi-fisheye camera SLAM system. We adapt the SLAM system proposed in ORB-SLAM and ORB-SLAM2 and extend it for use with fisheye and multi-fisheye camera systems.

News

  • 25/10/2016 added paper: Paper
  • See a video here: VIDEO

The novel methods and concepts included in this new version are:

  • MultiKeyframes
  • Generic camera model (Scaramuzza's polynomial model).
  • MultiCol - a generic method for bundle adjustment for multi-camera systems.
  • a hyper graph (g2o) formulation of MultiCol
  • dBRIEF and mdBRIEF, a distorted and an online-learned, masked version of BRIEF, respectively
  • Multi-camera loop closing
  • minimal (non)-central absolute pose estimation (3 pts) instead of EPnP which is non-minimal (6 pts)

In terms of performance, the following things were modified:

  • exchanged all transformations and vectors from cv::Mat to cv::Matx and cv::Vec
  • changed matrix access (descriptors, images) from .at<T>() to .ptr()
  • set termination criteria for bundle adjustment and pose estimation using g2o::SparseOptimizerTerminateAction (see the sketch below)
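
A minimal sketch of how such a terminate action is typically registered with g2o (the solver setup is omitted, and the thresholds are illustrative values, not necessarily the ones used by MultiCol-SLAM):

#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/sparse_optimizer_terminate_action.h>

void configureTermination(g2o::SparseOptimizer& optimizer)
{
    // stop once the relative chi2 gain per iteration drops below the
    // threshold, or after the hard iteration cap is reached
    auto* terminateAction = new g2o::SparseOptimizerTerminateAction;
    terminateAction->setGainThreshold(1e-6);
    terminateAction->setMaxIterations(20);
    optimizer.addPostIterationAction(terminateAction);
}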

A paper describing the proposed SLAM system will follow. Here are some short descriptions of how the multi-camera integration works.

The MultiCol model is explained extensively in the paper given below. Here we briefly recapitulate the content. The MultiCol model is given by:

m'_{itc} = \pi_c(M_c^{-1} M_t^{-1} X_i)

where the indices denote object point i, observed at time t, in camera c. M_t is the pose of the multi-camera system (the body frame, introduced below) at time t, and M_c is the fixed pose of camera c within that body frame. The camera projection is given by \pi, and we chose a general projection function, making this model applicable to a variety of prevalent (central) cameras, like perspective, fisheye and omnidirectional.

For a single camera, we could omit the matrix M_c. This yields the classic collinearity equations, m'_{it} = \pi(M_t^{-1} X_i), which are depicted in the following figure. Each observation m' then has two indices, i.e. t and i.

To handle multi-camera systems, the body frame is introduced, i.e. a frame that describes the motion of the multi-camera rig:

If we are optimizing the exterior orientation of our multi-camera system, we are actually looking for an estimate of matrix M_t. Now each observation has three indices.

The graphical representation of MultiCol can be realized in a hyper-graph and g2o can be used to optimize vertices of this graph:
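
To make the chain of transformations concrete, here is a small, self-contained sketch of the MultiCol reprojection chain using cv::Matx, which the system uses internally. The helper names and the pinhole stand-in for \pi are hypothetical; MultiCol-SLAM itself uses Scaramuzza's polynomial model:

#include <opencv2/core.hpp>

// hypothetical stand-in for the generic projection \pi_c (a pinhole here)
static cv::Vec2d project(const cv::Vec3d& Xc)
{
    return cv::Vec2d(Xc[0] / Xc[2], Xc[1] / Xc[2]);
}

// apply a 4x4 homogeneous transformation to a 3D point
static cv::Vec3d transform(const cv::Matx44d& T, const cv::Vec3d& X)
{
    cv::Vec4d Xh = T * cv::Vec4d(X[0], X[1], X[2], 1.0);
    return cv::Vec3d(Xh[0], Xh[1], Xh[2]);
}

int main()
{
    cv::Matx44d M_t = cv::Matx44d::eye(); // pose of the body frame at time t
    cv::Matx44d M_c = cv::Matx44d::eye(); // pose of camera c within the body frame
    cv::Vec3d X_i(0.5, -0.2, 3.0);        // object point i in world coordinates

    // MultiCol model: m'_itc = \pi_c(M_c^{-1} M_t^{-1} X_i)
    cv::Vec2d m = project(transform(M_c.inv() * M_t.inv(), X_i));
    // the residual minimized in MultiCol bundle adjustment is m'_itc - m
    return 0;
}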

1. Related Publications:

@Article{UrbanMultiColSLAM16,
  Title={{MultiCol-SLAM} - A Modular Real-Time Multi-Camera SLAM System},
  Author={Urban, Steffen and Hinz, Stefan},
  Journal={arXiv preprint arXiv:1610.07336},
  Year={2016}
}
@Article{UrbanMultiCol2016,
  Title = {{MultiCol Bundle Adjustment: A Generic Method for Pose Estimation, Simultaneous Self-Calibration and Reconstruction for Arbitrary Multi-Camera Systems}},
  Author = {Urban, Steffen and Wursthorn, Sven and Leitloff, Jens and Hinz, Stefan},
  Journal = {International Journal of Computer Vision},
  Year = {2016},
  Pages = {1--19}
 }
@Article{urban2015improved,
  Title = {{Improved Wide-Angle, Fisheye and Omnidirectional Camera Calibration}},
  Author = {Urban, Steffen and Leitloff, Jens and Hinz, Stefan},
  Journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  Year = {2015},
  Pages = {72--79},
  Volume = {108},
  Publisher = {Elsevier},
}

2. Requirements

  • C++11 compiler

  • As the accuracy and speed of the SLAM system also depend on the hardware, we advise you to run the system on a strong CPU. We mainly tested the system using a laptop with an i7-3630QM @ 2.4 GHz and 16 GB of RAM running Windows 7 x64, so anything above that should be fine.

3. Camera calibration

We use a rather generic camera model, and thus MultiCol-SLAM should work with any prevalent central camera. To calibrate your cameras, follow the instructions on link. The system expects a calibration file with the following structure:

# Camera Parameters. Adjust them!
# Camera calibration parameters camera back
Camera.Iw: 754
Camera.Ih: 480

# hyperparameters
Camera.nrpol: 5
Camera.nrinvpol: 12

# forward polynomial f(\rho)
Camera.a0: -209.200757992065
Camera.a1: 0.0 
Camera.a2: 0.00213741670953883
Camera.a3: -4.2203617319086e-06
Camera.a4: 1.77146086919594e-08

# backward polynomial rho(\theta)
Camera.pol0: 293.667187375663
.... and the rest pol1-pol10
Camera.pol11: 0.810799620714366

# affine matrix
Camera.c: 0.999626131079017
Camera.d: -0.0034775192597376
Camera.e: 0.00385134991673147

# principal point
Camera.u0: 392.219508388648
Camera.v0: 243.494438476351

You can find example files in ./Examples/Lafida
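
For illustration, here is a minimal sketch (not part of the MultiCol-SLAM API) of reading such a file with cv::FileStorage and projecting a camera-frame 3D point with the backward polynomial rho(\theta); the file name and the OCamCalib world2cam conventions are assumptions:

#include <opencv2/core.hpp>
#include <cmath>
#include <string>
#include <vector>

int main()
{
    cv::FileStorage fs("InteriorOrientationFisheye0.yaml", cv::FileStorage::READ);

    int nrinvpol = (int)fs["Camera.nrinvpol"];
    std::vector<double> pol(nrinvpol);
    for (int i = 0; i < nrinvpol; ++i)
        pol[i] = (double)fs["Camera.pol" + std::to_string(i)];

    double c  = (double)fs["Camera.c"],  d = (double)fs["Camera.d"], e = (double)fs["Camera.e"];
    double u0 = (double)fs["Camera.u0"], v0 = (double)fs["Camera.v0"];

    // project X = (x, y, z), given in the camera frame
    double x = 0.1, y = 0.05, z = 1.0;
    double norm  = std::sqrt(x * x + y * y);
    double theta = std::atan2(z, norm); // incidence angle, sign convention as in OCamCalib

    double rho = 0.0; // evaluate rho(theta) with Horner's scheme, pol[0] is the constant term
    for (int i = nrinvpol - 1; i >= 0; --i)
        rho = rho * theta + pol[i];

    double u  = x / norm * rho, v = y / norm * rho;
    double px = u * c + v * d + u0; // affine matrix [c d; e 1]
    double py = u * e + v + v0;
    return 0;
}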

4. Multi-camera calibration

You can find example files in ./Examples/Lafida TODO

5. Dependencies:

Pangolin

For visualization. Get the instructions here: Pangolin

OpenCV 3

Required is at least OpenCV 3.0. The library can be found at: OpenCV.

Eigen 3

Required by g2o. Version 3.2.9 is included in the ./ThirdParty folder. Other versions can be found at Eigen.

OpenGV

OpenGV can be found at: OpenGV. It is also included in the ./ThirdParty folder. We use OpenGV for re-localization (GP3P) and relative orientation estimation during initialization (Stewenius).
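
For reference, a short sketch of how GP3P can be run inside a RANSAC loop via OpenGV's sample-consensus interface; the data containers are left empty here, whereas in the system they would be filled with 3D-2D matches from all cameras of the rig:

#include <cmath>
#include <vector>
#include <opengv/absolute_pose/NoncentralAbsoluteAdapter.hpp>
#include <opengv/sac/Ransac.hpp>
#include <opengv/sac_problems/absolute_pose/AbsolutePoseSacProblem.hpp>

opengv::transformation_t estimateRigPose(
    opengv::bearingVectors_t& bearingVectors, // unit rays, one per observation
    std::vector<int>& camCorrespondences,     // which camera of the rig saw each ray
    opengv::points_t& points,                 // matched 3D map points
    opengv::translations_t& camOffsets,       // camera positions in the body frame
    opengv::rotations_t& camRotations)        // camera rotations in the body frame
{
    using namespace opengv;
    absolute_pose::NoncentralAbsoluteAdapter adapter(
        bearingVectors, camCorrespondences, points, camOffsets, camRotations);

    sac::Ransac<sac_problems::absolute_pose::AbsolutePoseSacProblem> ransac;
    ransac.sac_model_.reset(new sac_problems::absolute_pose::AbsolutePoseSacProblem(
        adapter, sac_problems::absolute_pose::AbsolutePoseSacProblem::GP3P));
    ransac.threshold_ = 1.0 - std::cos(std::atan(0.5 / 800.0)); // illustrative angular threshold
    ransac.max_iterations_ = 100;
    ransac.computeModel();
    return ransac.model_coefficients_; // pose hypothesis with the most inliers
}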

DBoW2 and g2o

As in ORB-SLAM2, we use modified versions of DBoW2 and g2o for place recognition and optimization, respectively. Both are included in the ./ThirdParty folder. The original versions can be found here: DBoW2, g2o.

6. Build MultiCol-SLAM:

Ubuntu:

This is tested with Ubuntu 16.04. Before you build MultiCol-SLAM, you have to build and install OpenCV and Pangolin. This can for example be done by running the following:

Build Pangolin:

sudo apt-get install libglew-dev cmake
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake -DCPP11_NO_BOOST=1 ..
make -j

Build OpenCV 3.1

This is just a suggestion on how to build OpenCV 3.1. There are plenty of options, and some packages might be optional.

sudo apt-get install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
git clone https://github.com/Itseez/opencv.git
cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_CUDA=OFF ..
make -j
sudo make install

this will take some time...

Build MultiCol-SLAM

git clone https://github.com/urbste/MultiCol-SLAM.git MultiCol-SLAM
cd MultiCol-SLAM
chmod +x build.sh
./build.sh

The build.sh script takes care of the rest: it creates a library and the executable multi_col_slam_lafida, which you can run as shown in Section 7 below.

Windows:

This description assumes that you are familiar with building libraries using cmake and Visual Studio. Required is at least Visual Studio 2013. The first step is to build Pangolin.

  • Download or clone Pangolin.
  • Run cmake to create a VS project in $PATH_TO_PANGOLIN$/build.
  • You can add options if you like, but the most basic set of options is sufficient for MultiCol-SLAM and should build without issues.
  • Open build/Pangolin.sln, switch to Release, and build the solution (ALL_BUILD).

Next build OpenCV 3.1.

  • Download or clone OpenCV 3.1.
  • Run cmake to create a VS project in $PATH_TO_OpenCV$/build.
  • You might want to switch off building the CUDA libraries, as this takes a long time.
  • Open $PATH_TO_OpenCV$/build/OpenCV.sln, switch to Release, and build the solution. Then open the folder CMakeTargets, right-click INSTALL and select Build.

Now download or clone MultiCol-SLAM. Next build DBoW2:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/DBoW2/build
  • If you get configuration errors, you likely did not set OpenCV_DIR.
  • Set this path to $PATH_TO_OpenCV$/build/install and run Generate.
  • Open $MultiCol-SLAM_PATH$/ThirdParty/DBoW2/build/DBoW2.sln, switch to Release, and build the solution.

Next build g2o:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/g2o/build
  • If you get any errors, you might want to set the EIGEN3_INCLUDE_DIR. Either you set it to your own version of Eigen or use the version that is provided in the ThirdParty folder $MultiCol-SLAM_PATH$/ThirdParty/Eigen.
  • Hit Generate and you will get the solution g2o.sln. Open it, select Release and build the solution.

In a last step, we will build OpenGV. Unfortunately this takes quite some time under Windows (hours).

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/OpenGV/build
  • Open $MultiCol-SLAM_PATH$/ThirdParty/OpenGV/build/OpenGV.sln, switch to Release, and build the solution.

Finally, we can build MultiCol-SLAM:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/build
  • If you get any errors:
  • Set OpenCV_DIR: $PATH_TO_OpenCV$/build/install
  • Set Pangolin_DIR: $PATH_TO_PANGOLIN$/build/src
  • Open $MultiCol-SLAM_PATH$/build/MultiCol-SLAM.sln and build the solution (ALL_BUILD)

7. Run examples

By now you should have compiled all libraries and MultiCol-SLAM. If everything went well, you will find the executable in ./Examples/Lafida/Release if you are running Windows, and in ./Examples/Lafida on Linux (see the run command below).

First, download the indoor dynamic dataset: dataset. Then extract the folder, e.g. to:

$HOME$/Downloads/IndoorDynamic

The executable multi_col_slam_lafida expects 4 paths: first the path to the vocabulary file, second the path to the settings file, third the path to the calibration files, and fourth the path to the images. In our example, we could run MultiCol-SLAM like this:

./Examples/Lafida/multi_col_slam_lafida ./Examples/small_orb_omni_voc_9_6.yml  ./Examples/Lafida/Slam_Settings_indoor1.yaml ./Examples/Lafida/ $HOME$/Downloads/IndoorDynamic

Important: To evaluate the trajectory you will need to transform the result into the rigid body coordinate system. The transformation matrix MCS_to_Rigid_body can be found here.
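
For illustration, a minimal sketch of applying such a fixed 4x4 transformation to every estimated pose before evaluation; whether MCS_to_Rigid_body multiplies from the left or the right depends on the convention in which the matrix is published, so verify the composition order against the ground truth:

#include <opencv2/core.hpp>
#include <vector>

int main()
{
    cv::Matx44d T_mcs_to_body = cv::Matx44d::eye(); // load the published matrix here
    std::vector<cv::Matx44d> trajectory;            // poses estimated by MultiCol-SLAM

    for (auto& T : trajectory)
        T = T * T_mcs_to_body; // assumed composition order; see note above
    return 0;
}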


multicol-slam's Issues

vs2015 needs an extra #include.

Thank you for this code; I have been struggling to get a fisheye lens working in a SLAM system, this is great.

Quick note: in Visual Studio 2015, I had to add:

#include <iterator>

to the file mdBriefExtractorOct.cpp to get std::back_inserter to compile.

Intrinsic Camera Parameters Calibration Issue -- Initialization failed

I am trying to calibrate a Gear 360 (only one 180 fisheye lens right now) and am using ImprovedOcamCalib as well. The calibration parameters are as follows:

%YAML:1.0

# Camera Parameters. Adjust them!
# Camera calibration parameters camera back
Camera.Iw: 640
Camera.Ih: 640

# hyperparameters
Camera.nrpol: 6
Camera.nrinvpol: 3

# forward polynomial f(\rho)
Camera.a0: -172.296104925459
Camera.a1: 0
Camera.a2: 0.00160105662555009
Camera.a3: -6.34303796841769e-06
Camera.a4: 6.56202451752210e-08
Camera.a5: -1.83867951781103e-10

# backward polynomial rho(\theta)
Camera.pol0: 17.2540014422096
Camera.pol1: 80.6878056249852
Camera.pol2: 298.076583841793
Camera.pol3: 336.001425859136

# affine matrix
Camera.c: 1.001231302713875
Camera.d: -4.098360811880249e-04
Camera.e: -3.266710888569786e-04

# principal point
Camera.u0: 3.222578566035110e+02
Camera.v0: 3.181511158806330e+02

# create a mirror mask for fisheye cameras
Camera.mirrorMask: 1

The forward poly is extracted from "ss", and the backward poly is extracted from "pol"...I am using 15 grayscale images, downscaled to 640x640 for calibration. The calibration checkerboard is 7x5 (29mm).

The strange thing is that the tracking can initialize with the sample calibration file (Fisheye0.yaml) that comes in the repository!!!

Any idea what I might be missing here?
Just to emphasize: this is only for one of the lenses... I am trying to use a single camera for tracking and point cloud generation... I guess there is no point in using two lenses that are 180 degree back to back (<-->) with no overlap...

build failing on Ubuntu 16.04

I am trying to build your code on Ubuntu 16.04 and it is failing with:
In file included from /home/poine/work/multicol/MultiCol-SLAM/src/g2o_MultiCol_sim3_expmap.cpp:20:0:
/home/poine/work/multicol/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:117:22: error: expected ‘)’ before ‘<’ token
std::unordered_map<size_t, int>& kp_to_cam1,
^
/home/poine/work/multicol/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:175:8: error: ‘unordered_map’ in namespace ‘std’ does not name a template type
std::unordered_map<size_t, int> keypoint_to_cam1;

It looks like there is an issue with the std::unordered_map template. I am using
poine@nina /work/multicol/MultiCol-SLAM/build $ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609

Any idea?

Thanks in advance

build failing on Windows 10

I have gotten everything to work - doing my own fresh GitHub loads for Eigen, Pangolin and OpenCV. My problem comes when I hit opengv. This is the message I get from main.cpp when I try to build opengv:

1>------ Build started: Project: opengv, Configuration: Release Win32 ------
1>  main.cpp
1>C:\Users\Jeff\Documents\GitHub\MultiCol-SLAM\ThirdParty\OpenGV\src\absolute_pose\modules\main.cpp(284): error C2280: 'Eigen::Block<Derived,3,1,true> &Eigen::Block<Derived,3,1,true>::operator =(const Eigen::Block<Derived,3,1,true> &)': attempting to reference a deleted function
1>          with
1>          [
1>              Derived=Eigen::Matrix<double,3,3,0,3,3>
1>          ]
1>  c:\users\jeff\documents\github\multicol-slam\thirdparty\opengv\third_party\eigen\src/Core/Block.h(143): note: compiler has generated 'Eigen::Block<Derived,3,1,true>::operator =' here
1>          with
1>          [
1>              Derived=Eigen::Matrix<double,3,3,0,3,3>
1>          ]
1>========== Build: 0 succeeded, 1 failed, 1 up-to-date, 0 skipped ==========

Using a single fisheye camera loses tracking on the indoor_dynamic dataset

Dear KIT team,
I have successfully run the example.
But when I try to use only one of the three fisheye cameras to run the program on the indoor_dynamic dataset, it always loses tracking.
I modified extractor.nFeatures: 1000 in setting_indoor1.yaml as you wrote in the paper, and I am sure the input images and calibration parameters belong only to the chosen camera (I used cam2 in this example dataset).

Is this right? Where is my setting wrong?

Hope you can give any reply, thank you!!!

Suggested Fisheye camera

Hi Steffen,

First of all, thanks a lot for your work on multicol-SLAM. I have used your fisheye calibration before, and it worked quite well.

For the fisheye-based ORB SLAM, do you mind recommending some fisheye cameras that you used? It's great if we can connect the camera directly to the PC machine or nvidia board to do SLAM.

Thank you,

Best regards,
Tuan

Camera Calibration

Hi Steffen Urban and your group,

Really interesting and promising approach to SLAM research. I would like to use my own dataset, i.e. my own set of images, to run your system. I have some questions regarding the camera calibration parameters:

First, I tried to use your recommendation for the single-camera calibration, but it seems that it does not provide the "backward polynomial rho" values after I ran the M-module.

Next, for the multi-camera calibration, there is no additional information on how you find the transformation relationship between the three cameras. Could I use OpenCV to obtain such rotation vector and translation vector values?

Cheers,
Rongen

What is each parameter of calib_results.txt pointing to?

Hello,
first of all thank you for that plugin.

We calibrated using OCamCalib Toolbox.
What parameters can be read from calib_results.txt?

As a result of the calibration, the following file was acquired.

<calib_results.txt>
#polynomial coefficients for the DIRECT mapping function (ocam_model.ss in MATLAB). These are used by cam2world

5 -1.124087e+01 0.000000e+00 1.084859e-04 -9.160804e-07 2.898989e-09

#polynomial coefficients for the inverse mapping function (ocam_model.invpol in MATLAB). These are used by world2cam

1574 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
(abridgement)

#center: "row" and "column", starting from 0 (C convention)

383.000000 511.000000

#affine parameters "c", "d", "e"

1.000000 0.000000 0.000000

#image size: "height" and "width"

768 1024

From this result, the following can be read:
① Polynomial
  a0 = -1.124087e+01
  a1 = 0.000000e+00
  a2 = 1.084854e-04
  a3 = -9.160804e-07
  a4 = 2.898989e-09

②affine parameters
   c = 1.000000
  d = 0.000000
  e = 0.000000

③image_height = 768
 Image_width = 1024

④ Absolute projection of lens center (cX, cY)
  cX = 511.000000
   cY = 383.000000

⑤ Center coordinates of image (X, Y)
   X = 0.5 x image_width - cX
  Y = 0.5 x image_height - cY

  X = 0.5 * 1024 - 511.000000 = 1
  Y = 0.5 * 768 - 383.000000 = 1

To run MultiCol-SLAM, you still need the following parameters:
· Camera.nrpol / Camera.nrinvpol (hyperparameters)
· Camera.pol0 to Camera.pol11 (backward polynomial rho(theta))
· Camera.u0 / Camera.v0 (principal point)

Can these be read from this file?

Thank you.

Hom2Cayley Produces same results for negated angles

Hi,
I have an omnidirectional camera system with 4 cameras in it. There is almost a 90 degree rotation between each camera pair.
If we set CAM 0 (left camera) as the reference, then CAM 1 (front camera) and CAM 3 (back camera) have 90 and -90 degree rotations around the Y axis relative to CAM 0.
The strange thing is that the Cayley transform maps these two rotations to the same values! This results in the orientations of Camera 3 and Camera 1 being the same in the SLAM map viewer, which is wrong!

Map viewer with these 3 cameras in it:
https://drive.google.com/file/d/1jFVdt5FLhZR9UPOMFgLGRfUZZU6qmJmP/view?usp=sharing

In the link above Camera 3 (blue frame to the left) should face the opposite direction (180 degree around Y).

This is the code I use to convert the transformation matrix to Cayley in python:

import numpy as np
def rot2cayley(R):
    eyeM=np.eye(3)
    C1=R-eyeM
    C2=R+eyeM
    C= C1*np.linalg.inv(C2)
    cayley=np.array([[-C[1,2]],[C[0,2]],[-C[0,1]]])
    return cayley

def hom2cayley(M):
    R=np.array([[M[0,0],M[0,1],M[0,2]],[M[1,0],M[1,1],M[1,2]],[M[2,0],M[2,1],M[2,2]]])
    C=rot2cayley(R)
    answer=np.array([[C[0,0]],[C[1,0]],[C[2,0]],[M[0,3]],[M[1,3]],[M[2,3]]])
    return answer

Then converting 90 and -90 degree rotations around Y (expressed with quaternions) to Cayley parameters:

from scipy.spatial.transform import Rotation as R
import numpy as np
r=R.from_quat([0.0,0.707,0.0,0.707])
print(hom2cayley_r(r.as_matrix()))
r=R.from_quat([0.0,-0.707,0.0,0.707])
print(hom2cayley_r(r.as_matrix()))

Result in :
array([[-0. ],
[-0.5],
[ 0. ]])

array([[ 0. ],
[-0.5],
[ 0. ]])

Could anyone suggest what am I doing wrong or is this a property of the Cayley transform ?

Coordinate frame

Hello urbste and thank you for sharing your great work!

I would like to ask where in the code I can see which reference frame is used.

In which coordinates do you get the camera pose, and how do you translate and rotate it in order to move to the camera coordinate frame? Thanks in advance!

Running exception: Framebuffer with requested attributes not available

Firstly, thanks a lot to the authors for contributing this awesome system to the world.

I was trying to run this system with the dataset this repo provides, but an exception occurs when I run it. Is there any clue or instructions to fix it?

Run command

./multi_col_slam_lafida ../small_orb_omni_voc_9_6.yml ./MultiCamSys_Calibration.yaml ./Slam_Settings_indoor1.yaml ../../../indoor_dynamic

Requested to finish local mapper, loop closer and map publisher
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
terminate called after throwing an instance of 'std::invalid_argument'
  what():  'Look' and 'up' vectors cannot be parallel when calling ModelViewLookAt.

an error after a few seconds of running the project

hello, I ran this project but met a problem.

Error in `./Examples/Lafida/multi_col_slam_lafida': free(): invalid next size (fast): 0x00007f9d70339c70 ***
Aborted (core dumped)

It happened after a few seconds of running the program...
Could you please give me a suggestion about this?

map points in one camera only.

Hi, sorry to bother you yet again, I am just having one last issue.

When I start the system, it moves to TRACKING, but I only see keypoints in one frame (and map points in only one section of the point cloud/map, in front of one camera). If I rotate the cameras very slowly, more points are added and the map is built, but I must be very slow and careful or tracking is lost.

In the examples, I see that all frames are filled with roughly an equal number of keypoints, and the map is built from every camera as soon as initialisation occurs. If you have a minute, do you have any thoughts on why this might not be happening for me?

(Once the 360 degree map is built, tracking is great, but getting to that point is tricky!)

Thank you once again for your awesome work and code!

multicamera calibration?

Hi, can you explain the necessary calibration process for me, please? I can calibrate each camera separately using the info and links you provide, but how should I get the extrinsics between the cameras?

Do I need to do this, as I would with a stereo-based system? Or do I just calibrate each camera separately?

Thank you again for the code! :)

The result of ATE

Hello, urbste. Thanks for your great contributions. I have tested your public dataset IndoorDynamic and found some problems.

When I use three cameras, the tracking is very robust, but the ATE result is not good.
The ATE for IndoorDynamic in your paper (MULTICOL-SLAM - A MODULAR REAL-TIME MULTI-CAMERA SLAM SYSTEM) is 1.8 cm, but the result I got is about 20 cm. I've tried it many times.

At first, I did not change any parameters. I suspect I am doing something wrong. Can you help me explain why I got such a bad result?

Multi-Col SLAM for ORB-SLAM3

I have worked with Multi-Col SLAM. It is really useful for our development. I am wondering if you have created any package to work with multiple cameras in ORB-SLAM3?

Cayley Parameters

Dear MultiCol-SLAM team and users,

great job at implementing multi fisheye cameras into ORB-SLAM!

I have a question regarding the Cayley parameters. If I understood correctly, these are quaternions scaled so that the real part equals 1. So if a camera is turned 180 degrees, the real part of the quaternion tends towards zero and the Cayley parameters tend towards infinity; a singularity is introduced.
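
For reference, the relation alluded to here is the Gibbs/Rodrigues (Cayley) vector of a unit quaternion $q = (q_w, \mathbf{q}_{xyz})$ for a rotation by angle $\theta$ about axis $\mathbf{a}$:

$$\mathbf{c} = \frac{\mathbf{q}_{xyz}}{q_w} = \tan\!\left(\frac{\theta}{2}\right)\mathbf{a},$$

which indeed diverges as $\theta \to 180^\circ$, since $q_w \to 0$.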

I wondered: are there other benefits of using Cayley parameters that I might have missed? Will different problems be introduced if, for example, quaternions or homogeneous transformation matrices are used instead?

Thanks for any help in advance!

Best,
Franz

have you run this code in a bigger room or open environment?

Hi, thanks for your code. I got quite a nice result in the lab. Then I tested it in the lobby of a building where part of the ceiling is almost 8 meters high and part is 3 meters; the whole space is merely 10x20 meters. I ran the project with the same cameras around the room twice and got a weird result:
the trajectories of the first and second runs almost overlap, but they look like I was climbing, although I stayed on the same ground level.
Do you have any idea about the reason? Is it because of the depth threshold, or should I decrease the threshold for far points? I am totally confused.
Thanks again for your project and have a nice day!! :)

ROS Usage

Hi, have you tested whether the system can be used with ROS?

Advantages over ORB-SLAM2?

What do you think about the advantages of MultiCol-SLAM over ORB-SLAM2, such as more robustness, less likely tracking loss, and so on? Thank you! By the way, I'm optimistic about the development prospects of multi-camera SLAM.

More than 3 cameras in the system !

Hi, can anyone suggest how to change the number of cameras in the system to 4?
I have changed:
CameraSystem.nrCams in MultiCamSys_Calibration.yaml to 4 and added the parameters for the 4th camera.
Added InteriorOrientationFisheye3.yaml for the 4th camera.
Put the images for the 4th camera in the cam3 directory of the dataset and changed images_and_timestamps.txt accordingly.
Changed the following lines in mult_col_slam_lafida.cpp:
line 173 to:
vstrImageFilenames.resize(4);
line 188 till 200:

			string pathimg1, pathimg2, pathimg3,pathimg4;
			if (!(iss >> timestamp >> pathimg1 >> pathimg2 >> pathimg3 >> pathimg4))
				break;
			vTimestamps.push_back(timestamp);
			vstrImageFilenames[0].push_back(path2imgs + '/' + pathimg1);
			vstrImageFilenames[1].push_back(path2imgs + '/' + pathimg2);
			vstrImageFilenames[2].push_back(path2imgs + '/' + pathimg3);
			vstrImageFilenames[3].push_back(path2imgs + '/' + pathimg4);
		}
		++cnt;

	}
}

However, when I run the program, I get a segmentation fault error. What am I missing here?

Changing the number of cameras

I am attempting to change the number of cameras that MultiCol SLAM uses as input.

So far in multi_col_slam_lafida.cpp I altered LoadImagesAndTimestamps to resize vstrImageFilenames to hold input from two cameras, not three.

vstrImageFilenames.resize(2); //Formerly 3

I also commented out the statement that would normally put the path to the third image in the vector.

//vstrImageFilenames[2].push_back(path2imgs + '/' + pathimg3);

Unfortunately, I am now getting errors after the images have loaded, in the main loop.
Is there a way to alter MultiCol SLAM to take a different number of cameras as input?
Thank you.

Error output below, with annotations noting where the errors were found to occur in debugging.

Start processing sequence ...
Images in the sequence: 734

     //This error occurs in cTracking, GrabImageSet, in the else statement

Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
---Feature Extraction (85ms) - ImageId: 0---

    //The program reaches the end of the main loop and repeats
    //Printed in cTracking, GrabImageSet, in the else statement

---Feature Extraction (33ms) - ImageId: 1---

    //This error occurs in cTracking in the call to Track()

---matching time (12)--- nr:1157
[sm::SampleConsensusModel::getSamples] Can not select 8 unique points out of 0!
[sm::RandomSampleConsensus::computeModel] No samples could be selected!

    //This error occurs after the call to TrackMultiColSLAM before it reaches the end of the main loop.

OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/denald/software/opencv-3.1.0/modules/highgui/src/window.cpp, line 281
terminate called after throwing an instance of 'cv::Exception'
what(): /home/denald/software/opencv-3.1.0/modules/highgui/src/window.cpp:281: error: (-215) size.width>0 && size.height>0 in function imshow

Aborted (core dumped)

Coordinates in world space?

Hi again, one more query...

I have the tracking running great, but each time I run it and move a measured amount along one vector (only moving forward), the resulting coordinates returned are different each time. I assumed that, as there is extrinsic info involved, the coordinates were in world space, or at least repeatable. Is this not the case?

Should I be seeing the same measurement each time I run the system?

Thanks yet again.

tracking lost

Hi Steffen Urban and your group,
I have trouble when I run MultiCol-SLAM: when I run it on my own indoor dataset, I get very few Multi-KeyFrames. As time goes on, it loses tracking. I am using three cameras and I have calibrated the camera parameters. I suspect this is related to the relative placement of the three cameras. So when you ran MultiCol-SLAM, what was the angle between the three cameras? Or is there anything else I should pay attention to?
Cheers,
xpenu

Pass in a starting rotation?

Hi again, I am looking at integrating an IMU into the system. I don't need it tightly integrated; I just want to pass a starting rotation, so that the initial rotation matches the IMU when the map/tracking is initialized.

If you have time could you please point me in the right direction to start this?
Thank you again!

coordinate system?

Hi, sorry for the multiple messages. Just a quick question: have you changed the coordinate system from ORB-SLAM2? I am trying to align MultiCol-SLAM with stereo ORB-SLAM2, and the axes are all different.

Thanks!

Segmentation fault

Hi. I am trying to run MultiCol-SLAM on the dataset in the example.
However, after I ran the command:

./Examples/Lafida/multi_col_slam_lafida ./Examples/small_orb_omni_voc_9_6.yml ./Examples/Lafida/Slam_Settings_indoor1.yaml ./Examples/Lafida/ $HOME$/Downloads/IndoorDynamic

I get the following error (segmentation fault)

MultiCol-SLAM Copyright (C) 2015-2016 Steffen Urban.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.


Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!


Loading camera and MCS calibrations
Camera Parameters: 
- fps: 25
- color order: RGB (ignored if grayscale)

Extractor Parameters: 
- Number of Features: 400
- Scale Levels: 8
- Scale Factor: 1.2
- Fast Threshold: 20
- Learn Masks: 0
- Descriptor Size (byte): 32
- Use AGAST: 0
- FAST/AGAST Type: 2
- Score: HARRIS

Motion Model: Enabled


-------
Start processing sequence ...
Images in the sequence: 0

System shutdown
Requested to finish local mapper, loop closer and map publisher
init done
MakeCurrent: Not available with non-pangolin window.
All threads stopped...
-------

Segmentation fault (core dumped)

I have installed all the dependencies as required by the compilation instructions. Am I missing something?
Thank you @urbste

How to generate the Jacobian matrix?

The function mcsJacs1 is in the g2o_MultiCol_vertices_edges.cpp file.
The code seems to have been generated using a tool. What tool did you use to generate it?
Thank you very much!

Question about the normsAll[c] during initialization

Hi, Steffen

Thanks for your paper and code. But I have a question about the way you find the camera with the most reconstructed points.
It's easy to understand that the counter "nr_recon[c]" should be bigger than the threshold "60", but why should the middle norm "normsAll[c]" be bigger than 0.06?
In my view, according to the cross product formula bear1 x (R*bear2), we get a normal perpendicular to the vectors "bear1" and "R*bear2", and the norm of that normal is positively correlated with the area between the two vectors. So I would think the smaller the norm, the more accurate the rotation "R" will be, wouldn't it?

// find cam with most reconstructed points
bool init = false;
for (int c = 0; c < nrCams; ++c)
{
    if (nr_recon[c] > 60 &&
        normsAll[c] > 0.06)
    {
        init = true;
        bestCam = c;
        if (c > 0)
            if (normsAll[c] > normsAll[c - 1])
                bestCam = c;
    }
}

Another calibration question...

Hi, sorry to bother you again. I have found a multi-camera calibration toolbox that gives me the following result:

Extrinsics:

Camera #1 :
1  0  0  0
0  1  0  0
0  0  1  0
0  0  0  1

Camera #2 :

 0.4609181  -0.009186284    -0.8873951     -303.5744
0.03479596     0.9993646   0.007727831      23.35298
 0.8867603   -0.03443966     0.4609448     -163.4985
         0             0             0             1

Camera #3 :

 -0.7048325     0.1634035     0.6902974      333.0892
-0.02166609     0.9676953      -0.25119     -86.11362
 -0.7090429    -0.1920029    -0.6785227      -465.258
          0             0             0             1

Camera #4 :

  0.5077068    -0.0381636     0.8606842      375.7625
-0.08449859     0.9919959    0.09383074     -4.202368
 -0.8573761    -0.1203651     0.5004183     -113.0211
          0             0             0             1

Camera #5 :

-0.748888   -0.05602026    -0.6603246     -143.0585
0.1751792     0.9442421    -0.2787815     -114.4791
0.6391237    -0.3244512    -0.6973179      -515.224
        0             0             0             1

In the file MultiCamSys_Calibration.yaml, the content is:

# camera back
CameraSystem.cam1_1: -0.0238361786473007 #r1
CameraSystem.cam1_2: -2.05998171167958 #r2
CameraSystem.cam1_3: 0.695126790868671 #r3
CameraSystem.cam1_4: -0.140202124607334 #t1
CameraSystem.cam1_5: 0.0219677971160655 #t2
CameraSystem.cam1_6: -0.0226322662838432 #t3

# camera right
CameraSystem.cam2_1: -0.0103943566650926 #r1
CameraSystem.cam2_2: 1.12505249943085 #r2
CameraSystem.cam2_3: -0.402901183146028 #r3
CameraSystem.cam2_4: 0.108753418890423 #t1
CameraSystem.cam2_5: 0.0636197216520153 #t2
CameraSystem.cam2_6: 0.0657911760105832 #t3

# camera left
CameraSystem.cam3_1: 0 #r1
CameraSystem.cam3_2: 0 #r2
CameraSystem.cam3_3: 0 #r3
CameraSystem.cam3_4: -0.00157612288268783 #t1
CameraSystem.cam3_5: 0.103615531247527 #t2
CameraSystem.cam3_6: 0.201416323496156 #t3

I can just add the translation values from my extrinsic matrices, but what are the #r values? Radians?

Thank you again! (the toolbox, in case you are interested, is: https://sites.google.com/site/prclibo/toolbox).
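
Assuming the #r values are the Cayley rotation parameters used elsewhere in this project (cf. the rot2cayley snippet in the Hom2Cayley issue above) rather than radians, and noting that the toolbox translations above look like millimetres while the yaml values are on the order of metres, a hypothetical sketch of converting a 4x4 extrinsic matrix into the six CameraSystem.camX_* entries would be:

#include <opencv2/core.hpp>

// Cayley parameters of a rotation matrix: C = (R - I)(R + I)^{-1}
static cv::Vec3d rot2cayley(const cv::Matx33d& R)
{
    cv::Matx33d I = cv::Matx33d::eye();
    cv::Matx33d C = (R - I) * (R + I).inv();
    return cv::Vec3d(-C(1, 2), C(0, 2), -C(0, 1));
}

int main()
{
    cv::Matx44d M = cv::Matx44d::eye();      // extrinsic matrix from the toolbox
    cv::Matx33d R = M.get_minor<3, 3>(0, 0); // rotation block
    cv::Vec3d r = rot2cayley(R);             // -> camX_1..camX_3 (#r1..#r3)
    cv::Vec3d t(M(0, 3), M(1, 3), M(2, 3));  // -> camX_4..camX_6 (#t1..#t3), mind the units
    return 0;
}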

Real world scale?

Hi again, I have this code running great in a 5-camera system; it is perfect for what I need, except for the monocular scale issue. (Scale is not world scale, and is different on every run.)

This is obviously because of the initialization (unless I am doing something wrong). Did you ever look at using the camera baselines to get real-world scale values? Would this be a complicated thing to add?

Thank you again!

How to run this whole project smoothly?

Dear @FookSong
I have downloaded and compiled this project very quickly. Then I tried to run the program, executing:

./Examples/Lafida/multi_col_slam_lafida ./Examples/small_orb_omni_voc_9_6.yml ./Examples/Lafida/Slam_Settings_indoor2.yaml ./Examples/Lafida/Calibration/ /home/mm/Downloads/indoor_dynamic

The viewer window disappeared after one second.
The confusing thing is that I didn't change any source files. Would you please share your running steps or give me some suggestions?

The ATE would be much bigger

Hey, Dr. Urban,
I tried to reproduce the example (indoor_dynamic) in your paper. I also tried something else in LaFiDa; it's a nice dataset.
But unfortunately, I always got an ATE about 10 times larger (see the attached image) than you wrote in your paper. I transformed my trajectory to the rigid body, but it is always the same. Is there any way to get a better ATE result? I think your work is really cool!!! So I want to reproduce it and do some experiments with MultiCol-SLAM.
Any help (a new version or something else) will be appreciated!!

sincerely!

Can not achieve a good closed loop on the dataset

Hey, Dr. Urban,
I tried the dataset (outdoor_large_loop) from your project website. It's a nice dataset, and I get a good trajectory. But unfortunately, I cannot achieve a good closed loop on the dataset. I also tried the big vocabulary of ORB_SLAM2, but the result is the same. So I want to know the possible reasons for the failed loop closure. What can I do to overcome it?

Any help (a new version or something else) will be appreciated!!
sincerely!

Multiple independent cameras?

Does this code require that the transformations between the multiple cameras are fixed? Or can they be independent and solved for?

MultiCol-SLAM Ubuntu 14.04 Build Error

Hi,

I am trying to build MultiCol-SLAM on Ubuntu 14.04 but am getting the error below. Do you have any suggestions on how to fix it?


In file included from /home/ardalan/ORB-Fisheye/MultiCol-SLAM/src/g2o_MultiCol_sim3_expmap.cpp:20:0:
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:117:22: error: expected ‘)’ before ‘<’ token
std::unordered_map<size_t, int>& kp_to_cam1,
^
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:175:3: error: ‘unordered_map’ in namespace ‘std’ does not name a type
std::unordered_map<size_t, int> keypoint_to_cam1;
^
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:176:3: error: ‘unordered_map’ in namespace ‘std’ does not name a type
std::unordered_map<size_t, int> keypoint_to_cam2;
^
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h: In member function ‘cv::Vec2d MultiColSLAM::VertexSim3Expmap_Multi::cam_map1(const Vector3d&, int) const’:
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:142:17: error: ‘keypoint_to_cam1’ was not declared in this scope
int camIdx = keypoint_to_cam1.find(ptIdx)->second;
^
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h: In member function ‘cv::Vec2d MultiColSLAM::VertexSim3Expmap_Multi::cam_map2(const Vector3d&, int) const’:
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/include/g2o_MultiCol_sim3_expmap.h:153:17: error: ‘keypoint_to_cam2’ was not declared in this scope
int camIdx = keypoint_to_cam2.find(ptIdx)->second;
^
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/src/g2o_MultiCol_sim3_expmap.cpp: At global scope:
/home/ardalan/ORB-Fisheye/MultiCol-SLAM/src/g2o_MultiCol_sim3_expmap.cpp:25:48: error: expected constructor, destructor, or type conversion before ‘(’ token
VertexSim3Expmap_Multi::VertexSim3Expmap_Multi(
^
make[2]: *** [CMakeFiles/MultiCol-SLAM.dir/src/g2o_MultiCol_sim3_expmap.o] Error 1
make[1]: *** [CMakeFiles/MultiCol-SLAM.dir/all] Error 2
make: *** [all] Error 2

Thanks,
Ardalan
