Comments (5)

johnwlambert avatar johnwlambert commented on June 26, 2024

Hello @MengshiLi, that is correct, each LiDAR point cloud is provided in the egovehicle coordinate frame.

We do not provide the corresponding up/down LiDAR sensor from which each LiDAR return originates, since we treat the LiDAR as one 64-beam LiDAR rather than two separate 32-beam LiDARs. We've already placed all points into a common reference frame to make the data easier for our users to work with, and we've motion-compensated the sweeps for vehicle ego-motion.

However, we do provide the LiDAR laser index for each point. Another user has also been working on disambiguating the two LiDAR sweeps; their code can be found here and may be of interest to you.
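For readers who want to recover the two 32-beam sweeps from a merged cloud, here is a minimal sketch of splitting by that per-point laser index. The PLY field names ("x", "y", "z", "laser_number") and the 0-31 / 32-63 index split are assumptions about the sweep schema, not documented guarantees; verify them against your dataset version.

```python
# Hedged sketch: split a merged sweep back into its two 32-beam clouds
# using the per-point laser index. Field names and the index ranges are
# assumptions -- check the PLY schema of your dataset version.
from pyntcloud import PyntCloud

cloud = PyntCloud.from_file("sweep.ply")  # hypothetical sweep path
xyz = cloud.points[["x", "y", "z"]].to_numpy()
laser = cloud.points["laser_number"].to_numpy()

lidar_a = xyz[laser < 32]   # assumed: returns from the first 32-beam unit
lidar_b = xyz[laser >= 32]  # assumed: returns from the second 32-beam unit
```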

MengshiLi avatar MengshiLi commented on June 26, 2024

Thank you so much for the quick reply, @johnwlambert. On top of your clarification: when you combine the two Lidars' outputs as a single 64-beam Lidar, how do you handle the timing difference between these two Lidars? Do you assume that these two Lidars are perfectly matched in time? Or is the timing difference between them so tiny that it can be ignored?

Also, in the paper (http://openaccess.thecvf.com/content_CVPR_2019/html/Chang_Argoverse_3D_Tracking_and_Forecasting_With_Rich_Maps_CVPR_2019_paper.html), you mention supplemental material several times, but we can't find it online. Could you please share a link to it? Thanks in advance!

James-Hays avatar James-Hays commented on June 26, 2024

Hi @MengshiLi

"how do you handle the timing difference between these two Lidars? Do you assume that these two Lidars are perfectly matched in time? Or is the timing difference between them so tiny that it can be ignored?"

As John said, the points from the two lidars are motion compensated separately. Motion compensation is based entirely on the perceived movement of the AV. So if the AV is static, no motion compensation happens, and we're not doing anything different between the two lidar sweeps. If the AV IS moving, then we take into account that the lidar moved perhaps 0.5 m between sweeps and adjust the locations of the lidar returns accordingly. In practice, this works quite well for static objects in the world.

However, when both the AV and an object are moving, the motion compensation won't help. So you can see cases where a moving object is "smeared" out a bit, because the returns from that object came back 50 ms apart and the object had moved half a meter in that time. It may be advantageous to process the two lidars independently in some cases.

This situation isn't really unique to having two lidars, by the way. The entire notion of a lidar "frame" is artificial, because the points are being acquired continuously. Even with a single lidar, you can get a frame that starts and stops on the same object, and the same "smearing" is observed at the "seam". So even with a single lidar you might want to reason about finer time granularities than "frames".
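As an illustration of the compensation step described above, here is a minimal numpy sketch: points captured at one time are re-expressed in the egovehicle frame at a reference time via the vehicle's city-frame poses. The pose variables here are placeholders; argoverse-api exposes its own pose accessors.

```python
# Hedged sketch of ego-motion compensation: re-express points captured in
# the egovehicle frame at t_capture in the egovehicle frame at t_ref,
# going through the shared city frame. Pose inputs are placeholders.
import numpy as np

def transform(R: np.ndarray, t: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply an SE(3) transform (3x3 rotation R, translation t) to Nx3 points."""
    return pts @ R.T + t

def motion_compensate(pts_ego_capture, R_city_capture, t_city_capture,
                      R_city_ref, t_city_ref):
    # egovehicle(t_capture) -> city: static world points land in one frame.
    pts_city = transform(R_city_capture, t_city_capture, pts_ego_capture)
    # city -> egovehicle(t_ref): invert the reference-time pose.
    R_inv = R_city_ref.T
    t_inv = -R_inv @ t_city_ref
    return transform(R_inv, t_inv, pts_city)
```

Note that this only corrects for the AV's own motion, which is exactly why independently moving objects still smear: their motion between returns is not modeled.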

Regarding supplemental material, CVPR does host that. It's on the page you linked to :). See the "Supp" link at the very bottom.

MengshiLi avatar MengshiLi commented on June 26, 2024

Hi @James-Hays

Thanks so much for the detailed explanation. So, with a small enough time granularity, it is reasonable to align the two Lidars' return beams that arrive at approximately the same time and store them in a combined file named by the nearest timestamp. Is this understanding correct?

James-Hays avatar James-Hays commented on June 26, 2024

I don't think there's a simple answer to your question. For some applications, such as deep lidar object detection, I would imagine the merged 10 Hz point clouds are sufficient. If you are instead trying to recover very precise shape models for dynamic objects, then I think you want to maintain an understanding of the time at which each lidar point was detected.
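To make the nearest-timestamp pairing from the question above concrete, here is a hedged sketch of matching one sensor's sweep timestamps against another's. The timestamp arrays are placeholders, not an argoverse-api call.

```python
# Hedged sketch: for each sweep timestamp from sensor A, find the index of
# the closest-in-time sweep from sensor B. Assumes ts_b is sorted ascending
# (sweep timestamps normally are). Input arrays are illustrative placeholders.
import numpy as np

def pair_by_nearest_timestamp(ts_a: np.ndarray, ts_b: np.ndarray) -> np.ndarray:
    """Return, for each timestamp in ts_a, the index of the nearest one in ts_b."""
    idx = np.searchsorted(ts_b, ts_a)
    idx = np.clip(idx, 1, len(ts_b) - 1)
    left, right = ts_b[idx - 1], ts_b[idx]
    return np.where(ts_a - left <= right - ts_a, idx - 1, idx)

# Example: three 10 Hz sweeps from lidar A matched to lidar B's timestamps.
ts_a = np.array([0.00, 0.10, 0.20])
ts_b = np.array([0.02, 0.12, 0.22])
print(pair_by_nearest_timestamp(ts_a, ts_b))  # -> [0 1 2]
```

As James notes, whether such a pairing suffices depends on the application; for precise dynamic-object shape recovery you would keep per-point times rather than per-sweep ones.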
