
Real world scale? · multicol-slam (open, 7 comments)

antithing commented on August 18, 2024
Real world scale?

from multicol-slam.

Comments (7)

urbste commented on August 18, 2024

Hi. I am glad that you got it to work! There are a couple of initialization methods to look into. I tried some, but since initialization was not one of my main concerns, I just went with a quick-and-dirty solution.

I experimented with the following algorithms:

  • Eigensolver of Kneip
  • Stewenius
  • 17 pt.

They are all included in OpenGV.
The six-point solution (https://github.com/jonathanventura/multi-camera-motion) seems quite promising. I added it to my fork of OpenGV but never got to test it (https://github.com/urbste/opengv/tree/master/include/opengv/relative_pose/modules/sixpt_ventura). However, it will only compile on Ubuntu, as the computer-generated solver file is apparently too big for the Visual Studio compiler.
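All of the solvers above estimate the relative pose of a multi-camera rig from the generalized (non-central) epipolar constraint: a correspondence seen by cameras of a moving rig yields two rays that must intersect in space, i.e. be coplanar with the vector joining their centers. A minimal pure-Python sketch of that constraint (not the OpenGV API; the rig layout, motion, and point below are made-up numbers):

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def mat_vec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def rot_z(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Rig with two cameras; centers expressed in the body frame (metric, assumed).
vA = [0.0, 0.0, 0.0]
vB = [0.06, 0.0, 0.0]                  # 6 cm apart

# Rig motion between the two frames: X1 = R * X2 + t
R = rot_z(0.1)
t = [0.30, 0.05, 0.02]

P1 = [1.0, 0.4, 3.0]                   # a world point in frame-1 body coords
P2 = mat_vec(transpose(R), sub(P1, t)) # same point in frame-2 body coords

qA = sub(P1, vA)                       # ray of camera A, frame 1
qB = sub(P2, vB)                       # ray of camera B, frame 2

# Express camera B's ray in frame 1; both rays pass through P1, so the
# triple product of (center offset, ray A, ray B) must vanish.
base = sub(add(mat_vec(R, vB), t), vA)
residual = dot(base, cross(qA, mat_vec(R, qB)))
print(abs(residual))                   # ~0 for a true correspondence
```

The solvers listed (Kneip's eigensolver, Stewenius, 17-pt, 6-pt) differ in how many such constraints they stack and how they solve the resulting system, but this coplanarity residual is what they all drive to zero.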

You will have to change the initialization functions and probably also the initialization matching pipeline.


antithing commented on August 18, 2024

Thank you for getting back to me yet again. Sorry for all the questions, but if you have a moment, could you point me in the right direction to get started? (I am hoping it is as simple as swapping out a few functions, but I somehow doubt that is the case!) Would these initialization methods give repeatable, real-world-scale translation values? Thank you again!


jingdongZhang commented on August 18, 2024

@urbste @antithing I am confused about the extrinsics of the cameras; could you tell me how to obtain them?


CanCanZeng commented on August 18, 2024

Hi @urbste @antithing, I'm struggling to make this project work with my Insta360 panoramic camera, which provides a dual-fisheye video. The code sometimes works on rather simple datasets but fails in more natural scenes. I also find that if I use just a single camera, the code works well.
I checked the initialization code and I suspect the current strategy is not very accurate and cannot make the scale converge to real-world scale. Actually, I think the scale of multi-camera SLAM will converge to the calibration scale: a map point created by camera A is later projected into camera B, and that projection involves the calibration result. That is, the 3D point in world coordinates is transformed into the body frame and then into camera B's frame, and the transformation between the body and camera B is at real-world scale, so if the map is not at real-world scale, this projection will fail.
So if the scene is very large and the baseline between the cameras is rather small, the code may work; but if the baseline cannot be neglected relative to a small scene, the system will be very fragile.
I know this reasoning may be wrong, but I just want to discuss it with someone. Sorry to bother you if you are not working on this topic.
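The argument can be checked with a toy computation: scale the map by s while keeping the body-to-camera extrinsic metric, and the cross-camera reprojection no longer matches the observation unless s = 1 or the baseline is zero. A minimal sketch with made-up numbers (unit focal length, identity camera rotation):

```python
def project(X, cam_center):
    # Pinhole camera at cam_center with identity rotation, unit focal length.
    x, y, z = (X[i] - cam_center[i] for i in range(3))
    return (x / z, y / z)

b = [0.10, 0.0, 0.0]       # body -> camera B translation, metric (assumed 10 cm)
X = [0.5, 0.2, 2.0]        # map point at its true (metric) coordinates

observed = project(X, b)              # what camera B actually measures

s = 0.5                               # map reconstructed at half the real scale
X_scaled = [s * v for v in X]
predicted = project(X_scaled, b)      # the extrinsic b does NOT scale with the map

err = max(abs(o - p) for o, p in zip(observed, predicted))
print(err)   # ~0.05: a nonzero reprojection error that pushes s back toward 1
```

With b = 0 (a perfectly central rig) the error vanishes for any s, which is exactly why the baseline is what anchors the scale.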


tanjunyao7 commented on August 18, 2024

(quoting @CanCanZeng's comment above)

hi, can you share your calibration file for insta360 dual fisheye camera? thanks


PuYuuu commented on August 18, 2024

(quoting the exchange between @CanCanZeng and @tanjunyao7 above)

I am also curious about this. After I calibrated the Insta360 camera according to the readme.md, tracking was lost after running several frames each time. I wonder if there is anything special to pay attention to during calibration or initialization. Thank you!


urbste commented on August 18, 2024

Hey. I think for a panoramic 360 camera it is difficult to assume a 'baseline' between the two sensors. Of course there is a small difference between the projection centers of the two fisheye cameras, but by design they are as close together as possible so that the images can be stitched into a seamless panorama. For robust scale estimation in natural environments with several meters or more of scene depth, you will need a baseline of at least several centimeters (look at stereo cameras, for example). So even though you have two sensors, it would probably make more sense to model them as one camera and do monocular SLAM. MultiCol-SLAM can run with one camera, but its initialization routine is not made for that; ORB-SLAM has an elaborate routine for monocular systems, since initialization is crucial in the monocular case. Have a look at OpenVSLAM (although discontinued); it had a working mode for panoramic cameras.
Cheers
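A rough back-of-the-envelope for the baseline point above, using the standard first-order stereo error model dz ≈ z² / (b·f) · δd (the focal length, disparity noise, and baselines below are assumed numbers, not measured values for any particular camera):

```python
def depth_error(z, baseline, focal_px=500.0, disp_noise_px=0.5):
    # First-order propagation of disparity noise through z = f * b / d:
    # dz = z**2 / (b * f) * d_disp
    return z ** 2 / (baseline * focal_px) * disp_noise_px

small = depth_error(5.0, 0.03)   # ~3 cm "baseline" of a dual-fisheye 360 camera
large = depth_error(5.0, 0.12)   # 12 cm baseline of a typical stereo rig
print(small, large)              # ~0.83 m vs ~0.21 m error at 5 m scene depth
```

Since the error scales as 1/b, a centimeter-scale baseline gives close to meter-level depth (and hence scale) uncertainty at room-scale depths, which matches the advice to treat such a rig as a single monocular camera.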

