Comments (20)
@jeychandar
As @Seekerzero said, for SuperGlue-based alignment the point cloud needs enough density to generate a LiDAR intensity image with good similarity to the camera image. Try the dynamic points accumulator to densify the point cloud.
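For intuition, stacking several sparse scans into one cloud is the core idea behind densification; a minimal NumPy sketch (the `accumulate_frames` helper is hypothetical, and this ignores the motion compensation the tool's actual dynamic points integrator performs):

```python
import numpy as np

def accumulate_frames(frames):
    """Stack several (N_i, 4) arrays of [x, y, z, intensity] points
    from consecutive scans into one denser cloud."""
    return np.vstack(frames)

# Three sparse 2-point scans combine into one 6-point cloud.
scans = [np.random.rand(2, 4) for _ in range(3)]
dense = accumulate_frames(scans)
print(dense.shape)  # (6, 4)
```

In practice the sensor moves between scans, so the real integrator also deskews and aligns the frames before merging them.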
@Seekerzero
Thanks as always for your help!
from direct_visual_lidar_calibration.
@Seekerzero @koide3 As you mentioned earlier, I referred to the documentation and collected the "up" and "down" sample data with longer rosbag recordings, since the Blickfeld LiDAR has a repetitive scan pattern. I tried the dynamic points accumulator to densify the point cloud with this command: rosrun direct_visual_lidar_calibration preprocess blickfield blickfield_preprocessed_check -adv. It really helped to extract the intensity image and the extrinsic matrix. Finally, I was able to test LiDAR-camera fusion with the resulting extrinsic matrix, and it is accurate. Once again, thank you for your support, @koide3 and @Seekerzero!
@Seekerzero @koide3 Hi, I looked through your code but have not been able to resolve this issue yet. Could you point me to where I should check regarding this issue?
Hi jey, could you post the command line you used for the preprocess step here? An example of your LiDAR PointCloud2 field layout would also be helpful.
Besides, in the picture you uploaded here, your point cloud is shown in different colors derived from the intensity values, which means the intensities were correctly saved into the PLY file.
Thanks.
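One way to sanity-check that intensities really made it into the PLY file is to parse it and look at the intensity column; a minimal sketch for an ASCII PLY (the tool's actual output may be binary, so this tiny parser and the sample values are illustrative only):

```python
# Minimal ASCII PLY with x, y, z, intensity properties (hypothetical sample).
ply_text = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property float intensity
end_header
0.0 0.0 1.0 12.0
0.1 0.0 1.0 0.0
0.2 0.0 1.0 7.5
"""

def read_intensities(text):
    """Return the 4th column (intensity) of each vertex line."""
    lines = text.strip().splitlines()
    body = lines[lines.index("end_header") + 1:]
    return [float(line.split()[3]) for line in body]

vals = read_intensities(ply_text)
print(any(v != 0.0 for v in vals))  # True -> intensities were saved
```

If every value comes back zero, the problem is upstream of the PLY export rather than in the image generation.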
@Seekerzero Thank you for your reply. I ran this command: rosrun direct_visual_lidar_calibration preprocess blickfield blickfield_preprocessed_check -av, and additionally tried passing -i intensity with it. I have attached the PointCloud2 format and publishing options below.
Did you get an error like this: "error: failed to determine point intensity channel automatically"? If not, the intensity value for each point should be correctly saved into the PLY file.
I did not get any errors like the one you mentioned above.
So everything should be correct. As I said in the previous post, if you see the Turbo colormap in the viewer, the intensity value for each point was correctly loaded and saved. Please also check this.
What could be the reason these images are empty? The preprocess step is failing to produce the .bag_matches.json file. Could it be related to a difference in LiDAR point cloud frame conventions?
The preprocess step should return only one JSON file, calib.json.
Regarding the intensity image, I would first check whether the values are returned correctly here. Since you have an empty image, you should see something like [0, ..., 0, index]. If the intensities are all zeros, as I suspect, look into lidar_proj; otherwise, it could be something related to OpenCV.
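The all-zeros check described above can be run on any dump of the values; a small sketch with a hypothetical all_zero helper and made-up logged values:

```python
def all_zero(values, eps=1e-9):
    """True if every logged intensity is (numerically) zero."""
    return all(abs(v) < eps for v in values)

# Intensity values dumped from the projection step (hypothetical log).
logged = [0.0, 0.0, 0.0, 0.0]
if all_zero(logged):
    print("intensities lost before projection -> inspect lidar_proj")
else:
    print("intensities present -> inspect the OpenCV image writing")
```

This splits the debugging into two branches: a data problem before the projection, or an image-writing problem after it.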
@Seekerzero As you suggested, I cross-verified with another tool, OpenCalib/calib_anything, to check whether the rosbag has intensity recorded, and I was able to extract the LiDAR intensity. Since this direct_visual_lidar_calibration tool creates the .ply file with intensity, I can confirm that lidar_proj has no issues. You mentioned it could be related to OpenCV, but I am not sure where I could verify that. Could you help me resolve this issue? For reference, I am using this Blickfeld driver: https://git.tu-berlin.de/ecschuetz/ros_assembly/-/blob/main/src/ros_blickfeld_driver_src-v1.4.3/modules/ros_blickfeld_driver_core/src/blickfeld_driver_point_cloud_parser.cpp?ref_type=heads
You should set a breakpoint after the line of code I linked in the previous post (or simply save the intensity values to a file) to see whether the values are all zeros there. This is the first thing to check.
I believe it is not related to the LiDAR driver, since the PLY file is successfully generated and verified.
@Seekerzero I have saved the intensity values to a text file, and they are all zeros.
Thanks, then you should look here to see whether the values are properly assigned.
@Seekerzero I have attached two images below: one for the Livox (from the rosbag provided in this repo) and another for the Blickfeld sensor. You can notice the difference in frame convention. Do you think it impacts the LiDAR projection?
Livox Sensor:
Blickfeld sensor:
Yup, I believe so. Since the projection range is calculated based on here, and given the 72-degree FOV calculated for your sensor, these points might all get filtered out.
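To see how a frame-convention mismatch can empty the intensity image, here is a rough sketch of an FOV-limited spherical projection (this is not the repository's actual lidar_proj code; the x-forward convention and the 72-degree FOV are assumptions taken from this thread):

```python
import numpy as np

def project_points(points, fov_deg=72.0):
    """Keep only points whose azimuth falls inside the horizontal FOV.

    A cloud expressed in a different frame convention (e.g. z-forward
    instead of x-forward) lands outside the azimuth window and is
    dropped entirely, producing an empty intensity image.
    """
    x, y = points[:, 0], points[:, 1]
    azimuth = np.degrees(np.arctan2(y, x))  # assumes x-forward convention
    mask = np.abs(azimuth) <= fov_deg / 2.0
    return points[mask]

# A point along +x survives; the same point expressed z-forward
# (x near zero, z large) falls outside the azimuth window.
x_forward = np.array([[10.0, 0.5, 0.0, 1.0]])   # [x, y, z, intensity]
z_forward = np.array([[0.0, 0.5, 10.0, 1.0]])
print(len(project_points(x_forward)), len(project_points(z_forward)))  # 1 0
```

Rotating the cloud into the convention the projection expects (or adjusting the projection) restores the points before the FOV filter.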
@Seekerzero Yes, true, it was because of the frame convention. I am now able to get the LiDAR intensity image; the intensity values appear somewhere in the middle and the other values are zero. I tried this command: rosrun direct_visual_lidar_calibration initial_guess_auto blickfield_preprocessed_check
It still throws this error:
loading blickfield_preprocessed_check/2024-06-08-12-48-43.bag.(png|ply)
error: failed to open blickfield_preprocessed_check/2024-06-08-12-48-43.bag_matches.json
Aborted (core dumped)
After running preprocess, you should run
rosrun direct_visual_lidar_calibration find_matches_superglue.py <your preprocessed folder>
before running initial_guess_auto.
@Seekerzero Yes, that worked; I am able to get the extrinsic matrix, thank you for your guidance. I have another question: don't you feel there is some offset in the SuperGlue-generated PNG? Let me try this extrinsic value first.
SuperGlue might not work if the LiDAR image does not have enough similarity to the captured camera image (not enough density). You might want to look into the SLAM-based approach for collecting and preprocessing the data to accumulate the LiDAR image. Please look into the documentation example and check that.