koide3 / direct_visual_lidar_calibration
A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]
Home Page: https://koide3.github.io/direct_visual_lidar_calibration/
Hi, thanks for sharing such wonderful work. I am curious whether it is necessary to move the LiDAR up and down during data collection. If the sensor is already mounted on a robot or vehicle that can only move forward, backward, or spin in place, will the method still work under that condition? Thanks.
If the issue is about calibration procedure and results:
Describe the bug
This work is quite wonderful, which makes me want to validate it on the KITTI dataset. However, an error occurred when I ran the preprocess command.
To Reproduce
Steps to reproduce the behavior:
Another problem: if I want to calibrate based on a single pair of a LiDAR point cloud and an image, what should I do? I found that this code needs at least two frames of LiDAR and camera messages recorded for calibration.
Before opening an issue
If the issue is about build errors:
If the issue is about calibration procedure and results:
If the issue is about the algorithm:
Describe the bug
When I created my own Docker image and added the project, the visualization window was extremely laggy and I couldn't rotate the point cloud. What configuration did you use when creating the Docker image?
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots and sample data
If applicable, add screenshots and sample data to reproduce your problem.
Environment:
Additional context
Add any other context about the problem here.
Environment:
OS: ubuntu 20.04
ROS Version: noetic
I am having an issue while running the preprocess command:
rosrun direct_visual_lidar_calibration preprocess data/ process/ -a -d -v
Error:
data_path: /lidar-cam-calib/data/
dst_path : /lidar-cam-calib/process/
input_bags:
Hi, I have some questions about how the LiDAR intensity image is generated. When generating the intensity image for a LiDAR with a FoV larger than 150 degrees, why is the extrinsic set like this:
T_lidar_camera.linear()= (Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitX())).toRotationMatrix();
Why not use the same extrinsic as in the case where the LiDAR FoV is smaller than 150 degrees:
T_lidar_camera.linear() = (Eigen::AngleAxisd(M_PI_2, Eigen::Vector3d::UnitY()) * Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitZ())).toRotationMatrix();
Thanks!
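For reference, the two extrinsics in question can be compared numerically. The stdlib-only sketch below (my own helpers, not the toolbox's code) reproduces both rotation matrices and shows where each one sends the camera's optical axis, assuming the usual OpenCV camera convention (+Z forward) and LiDAR convention (+X forward); the exact intent behind the wide-FoV choice would need the author's confirmation.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

HALF_PI = math.pi / 2
R_wide = rot_x(-HALF_PI)                            # FoV > 150 deg branch
R_narrow = matmul(rot_y(HALF_PI), rot_z(-HALF_PI))  # FoV < 150 deg branch

# Where does the camera optical axis (+Z) end up in the LiDAR frame?
print([round(x) for x in apply(R_narrow, [0, 0, 1])])  # [1, 0, 0]: LiDAR +X (forward)
print([round(x) for x in apply(R_wide, [0, 0, 1])])    # [0, 1, 0]: LiDAR +Y
```

So the narrow-FoV virtual camera looks straight down the LiDAR's forward axis, while the wide-FoV rotation orients the virtual camera differently, presumably to suit the equirectangular projection used for large FoVs.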
CMake Error at CppUnitLite/CMakeLists.txt:6 (add_library):
Target "CppUnitLite" links to target "Boost::boost" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::serialization" but the target was
not found. Perhaps a find_package() call is missing for an IMPORTED
target, or an ALIAS target is missing?
CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::system" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::filesystem" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
How should I deal with this?
There is an error when I run:
ros2 run direct_visual_lidar_calibration find_matches_superglue.py ouster_preprocessed
ModuleNotFoundError: No module named 'torch'
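This usually means the Python interpreter that `ros2 run` launches cannot see a torch installation (for example, torch was installed into a different Python or a virtualenv). A small stdlib-only check, my own helper rather than part of the toolbox, prints which interpreter is running and whether it can import torch, so you know where to `pip install torch`:

```python
import importlib.util
import sys

def module_available(name):
    """Return True if `name` can be imported by *this* interpreter."""
    return importlib.util.find_spec(name) is not None

# Run this with the same interpreter the ros2 launcher uses; if the second
# value is False, install torch into the interpreter printed first.
print(sys.executable, module_available("torch"))
```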
Hi, thanks for your great work! I'm still a newbie with LiDAR and cameras, so I want to ask about the coordinate frames of the two sensors, since my LiDAR's (Ouster) coordinate system and my camera's coordinate system (Azure Kinect, OpenCV-based) are different. Does the calibration result T_lidar_camera already account for this change, or should I flip it myself? Thanks!
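For what it's worth, a homogeneous transform named T_lidar_camera conventionally maps camera-frame points into the LiDAR frame with the axis-convention change already included, so no extra flip should be needed. A tiny sketch of applying such a 4x4 transform (the matrix values below are hypothetical, purely to illustrate the convention):

```python
def transform(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p = (x, y, z)."""
    x, y, z = p
    return [T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3)]

# Hypothetical T_lidar_camera: camera +Z (optical axis) -> lidar +X,
# camera +X (right) -> lidar -Y, camera +Y (down) -> lidar -Z,
# plus a small translation between the sensor origins.
T_lidar_camera = [
    [0, 0, 1, 0.10],
    [-1, 0, 0, 0.02],
    [0, -1, 0, -0.05],
    [0, 0, 0, 1],
]

# A point 2 m straight ahead of the camera ends up ~2 m along lidar +X:
print(transform(T_lidar_camera, [0.0, 0.0, 2.0]))  # ~[2.1, 0.02, -0.05]
```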
Hello, I am a beginner in the field of LiDAR-camera calibration. After reading your paper and code, I find the work interesting and exciting, but I have some questions about the process.
In the paper, the inputs to preprocessing are LiDAR data and camera images, but when I looked at the code and the Livox data, the input is just a bag file (*.db3). Is the image derived from the point cloud data, or do you generate the images some other way? If I already have images and LiDAR data, can I skip preprocessing? Thanks a lot.
Describe the bug
I am using ROS1. During preprocessing I saw the note you provided saying that the camera parameters need to be specified in advance, but when I tried to specify them, I got the following results. What command should I type to do this?
Environment:
Thanks for your great research!
I have two questions.
1. I've heard that even a small change can lead to significant differences in sensor calibration. When I adjust the parameter robust_kernel_width in the function estimate_rotation_ransac in estimate_pose.cpp, the result changes significantly, so it seems necessary to determine the correct parameters for each dataset. Could you please provide guidance on how to set appropriate parameters like robust_kernel_width or error_thresh_sq?
2. I've noticed a step that identifies inliers in the estimate_rotation_ransac function, but it appears this information is never utilized. I'm curious why estimate_pose_lsq estimates the pose using all correspondences rather than just the inliers.
Thanks!
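Regarding robust_kernel_width: I don't know the intended tuning rule, but as background, a robust kernel reweights residuals so that large (outlier) errors contribute less to the least-squares problem. A generic Cauchy-kernel sketch (a common choice in robust estimation, not necessarily the toolbox's exact implementation) shows how the width sets the error scale at which downweighting kicks in:

```python
def cauchy_weight(residual, kernel_width):
    """IRLS weight of the Cauchy kernel rho(r) = (c^2/2) * log(1 + (r/c)^2).

    The weight rho'(r)/r = 1 / (1 + (r/c)^2) is ~1 for residuals well below
    the width c and tends to 0 for residuals far above it, so c should sit
    near the expected magnitude of *inlier* residuals.
    """
    return 1.0 / (1.0 + (residual / kernel_width) ** 2)

for c in (0.1, 1.0):
    # same residuals, very different influence depending on the width
    print(c, [round(cauchy_weight(r, c), 3) for r in (0.05, 0.5, 5.0)])
```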
Describe the bug
When I try to preprocess the data in livox_ros1.tar.gz, it always goes wrong. I followed the instructions every step of the way, but I don't know why it happens.
By the way, the highest CPU usage rate is 59.1%.
Screenshots and sample data
pi@ubuntu:~/catkin_ws$ rosrun direct_visual_lidar_calibration preprocess /mnt/hgfs/DATA_SENSOR/livox livox_preprocessed -av
data_path: /mnt/hgfs/DATA_SENSOR/livox
dst_path : livox_preprocessed
input_bags:
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_44_10.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_44_54.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_46_10.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_46_54.bag
topics in /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag:
- /livox/points : sensor_msgs/PointCloud2
- /livox/imu : sensor_msgs/Imu
- /image : sensor_msgs/Image
- /camera_info : sensor_msgs/CameraInfo
selected topics:
- camera_info: /camera_info
- image : /image
- points : /livox/points
intensity_channel: intensity
try to get the camera model automatically
warning: bag does not contain topic
: bag_filename=/mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag
: topic=/camera_info
Segmentation fault (core dumped)
Environment:
Hello, thank you very much for sharing your work, but the following problems occurred when reproducing it. When using the Ouster data you provided, I manually filled in the camera parameters according to the prompts, but the related problems with the function still occurred. The camera parameters were given according to the calib.json example.
Environment: Ubuntu 18.04, ROS Melodic
Input :
rosrun direct_visual_lidar_calibration preprocess --data_path /home/zlzk/catkin_ws/ouster --dst_path /home/zlzk/catkin_ws/ouster_preprocessed --image_topic /image --points_topic /points --camera_model plumb_bob --camera_intrinsics 1452.71,1455.87,1265.25,1045.81 --camera_distortion_coeffs -0.0403,0.087,0.002,0.005 -adv
Environment: Ubuntu 20.04, ROS Noetic
Input :
rosrun direct_visual_lidar_calibration preprocess --data_path /home/zlzk/catkin_ws/ouster --dst_path /home/zlzk/catkin_ws/ouster_preprocessed --image_topic /image --points_topic /points --camera_model plumb_bob --camera_intrinsics 1452.71,1455.87,1265.25,1045.81 --camera_distortion_coeffs -0.0403,0.087,0.002,0.005 -adv
I have also seen related issues. Using the dataset they provided and switching to the vlp16 branch(https://github.com/koide3/direct_visual_lidar_calibration/issues/13), the same problem still exists.
Describe the bug
When repeating the instructions for running the example using Docker for ROS1, the following appears in the console:
To Reproduce
Steps to reproduce the behavior:
docker run -it --rm --net host --gpus all -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/root/.Xauthority -v /home/robot/calibration/input_bags:/tmp/input_bags -v /home/robot/calibration/preprocessed:/tmp/preprocessed koide3/direct_visual_lidar_calibration:noetic rosrun direct_visual_lidar_calibration preprocess -av /tmp/input_bags /tmp/preprocessed
Expected behavior
reproduce calibration example.
Environment:
Thank you for open-sourcing this on GitHub; it helps me a lot in my studies! But I hit some errors today. Hope you can help me :)
Describe the bug
When I use initial_guess_manual, I get the error "glfw error 65543: GLX: Failed to create context: GLXBadFBConfig". I don't know how to solve it.
Screenshots and sample data
pi@ubuntu:~/catkin_ws$ rosrun direct_visual_lidar_calibration initial_guess_manual /home/pi/catkin_ws/ouster_pinhole_preprocessed
camera_model:plumb_bob
intrinsics :4
distortion :5
loading /home/pi/catkin_ws/ouster_pinhole_preprocessed/camera0-camera6-rslidar-chessboard_2023-07-26-17-35-23.bag.(png|ply)
image_size:1280x720
camera_fov:49.3068[deg]
glfw error 65543: GLX: Failed to create context: GLXBadFBConfig
Segmentation fault (core dumped)
Environment:
Addition
There is no error in the preprocess step; I get the corresponding output.
Can you make a Docker image for ROS Melodic? Thanks a lot!
Thanks for your great work!
I have already tested your dataset with the Docker image (ROS2) you provide.
However, when I try it with my own data, the automatic point integration fails. Are there some parameters I can tune?
Here is a description of my data:
LiDAR: solid-state, 120-degree FoV, repetitive scan pattern.
Rather than moving the LiDAR up and down for a while, I moved it straight forward for a few meters slowly (because the LiDAR is installed on a vehicle).
The point integration node's output trajectory looks as if the LiDAR did not move at all.
Hello, I have this issue; can anyone help me, please?
/home/user/ros/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘void vlcal::DynamicPointCloudIntegrator::voxelgrid_task()’:
/home/user/ros/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:144:47: error: ‘const class gtsam::Pose3’ has no member named ‘interpolateRt’
144 | T_odom_lidar_end = T_odom_lidar_begin.interpolateRt(T_odom_lidar_end, t);
| ^~~~~~~~~~~~~
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:245: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:6503: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make: *** [Makefile:163: all] Error 2
Invoking "make -j12 -l12" failed
Environment:
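For background on this error: Pose3::interpolateRt only exists in recent GTSAM versions, so this build failure typically indicates an older installed GTSAM; the usual remedy in similar reports is to build against the GTSAM version named in the install instructions. Conceptually, interpolateRt interpolates the rotation and translation parts of a pose independently. A stdlib-only Python sketch of that idea (my own helper names; quaternions as (w, x, y, z)):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation of unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                 # take the short arc
        q1 = [-c for c in q1]
        dot = -dot
    theta = math.acos(min(dot, 1.0))
    if theta < 1e-8:              # nearly identical rotations
        return list(q0)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def interpolate_rt(pose0, pose1, t):
    """pose = (quaternion, translation); rotation slerped, translation lerped."""
    q0, p0 = pose0
    q1, p1 = pose1
    q = slerp(q0, q1, t)
    p = [(1 - t) * a + t * b for a, b in zip(p0, p1)]
    return q, p

# halfway between identity and a 90-degree rotation about Z with translation
q_half, p_half = interpolate_rt(
    ([1, 0, 0, 0], [0, 0, 0]),
    ([math.cos(math.pi / 4), 0, 0, math.sin(math.pi / 4)], [1, 2, 3]),
    0.5)
```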
Hi,
thanks for the great project and easy calibration of lidar and camera.
As a nice addition to the project itself I would like to request a ROS 2 node for the implementation of fusion/overlay from camera image and lidar point cloud. The calibration data could directly be used to publish a fused point cloud.
I don't know what the best way to republish would be, but I could imagine something like the following: a publishing ROS 2 node, so that the results can also be visualized in RViz.
I didn't find an existing ROS 2 project for that kind of task; it would also be cool if you have any suggestions.
Thanks again for the really nice work!
Cheers
jp
Thank you for your wonderful research
Describe the bug
We do not obtain a good enough initial transform via SuperGlue. Why?
We executed the calibrator on our bags, but the NID result came out at approximately 0.95. We then assumed our bags did not meet the package's requirements, so we tried the node with your bags, but the result was still not perfect (as proposed). Has anyone tried this and obtained perfect calibration results?
We have a robot, so we cannot move the sensor vertically. Instead we drive for 10-15 seconds to generate a dense point cloud. Could this be the source of the error?
Or is it that our LiDAR and camera poses are not as close together as in your setup? Is that why the calibration result is not good?
Another problem is that we do not have the example bags' ground-truth calibration, so we cannot fully interpret the result in your visualizer. We would like to review it in RViz.
Lastly, has anyone obtained a perfect result?
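For reference when interpreting the ~0.95 number: NID (normalized information distance) ranges from 0 (the two intensity/image value sets fully determine each other, i.e., well aligned) to 1 (statistically independent), so lower is better. A minimal histogram-based sketch of the metric (my own implementation, not the toolbox's):

```python
import math
from collections import Counter

def nid(xs, ys):
    """Normalized information distance of two equally-long discrete sequences.

    NID = (H(X,Y) - I(X;Y)) / H(X,Y), in [0, 1]; 0 means one sequence
    fully determines the other, 1 means they are independent.
    """
    n = len(xs)
    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())
    h_x = entropy(Counter(xs))
    h_y = entropy(Counter(ys))
    h_xy = entropy(Counter(zip(xs, ys)))    # joint histogram
    mi = h_x + h_y - h_xy                   # mutual information
    return (h_xy - mi) / h_xy if h_xy > 0 else 0.0

print(nid([0, 1, 2, 3] * 8, [0, 1, 2, 3] * 8))  # 0.0 (perfectly dependent)
print(nid([0, 0, 1, 1] * 8, [0, 1, 0, 1] * 8))  # 1.0 (independent)
```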
Screenshots and sample data
The result of the experiment with your bags (2 bags)
Environment:
Describe the bug
Unable to generate dense point clouds during the data-preprocessing stage with a Velodyne 16-line (VLP-16) LiDAR.
Hello, what is nelder_mead.hpp used for? You didn't seem to mention it in the article.
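My understanding, which is an assumption since the file is not described in the paper, is that nelder_mead.hpp implements the derivative-free Nelder-Mead simplex method, a common choice for minimizing a metric such as NID without analytic gradients. A compact 2D sketch of the algorithm (simplified: inside contraction only):

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimizer for 2D functions (illustrative)."""
    # initial simplex: x0 plus one perturbed point per dimension
    pts = [list(x0)] + [[x0[j] + (step if j == i else 0) for j in range(2)] for i in range(2)]
    for _ in range(iters):
        pts.sort(key=f)
        best, worst = pts[0], pts[2]
        centroid = [(pts[0][k] + pts[1][k]) / 2 for k in range(2)]
        refl = [centroid[k] + (centroid[k] - worst[k]) for k in range(2)]
        if f(refl) < f(best):
            # reflection is the new best: try expanding further
            exp = [centroid[k] + 2 * (centroid[k] - worst[k]) for k in range(2)]
            pts[2] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[1]):
            pts[2] = refl
        else:
            # contract the worst vertex toward the centroid
            contr = [centroid[k] + 0.5 * (worst[k] - centroid[k]) for k in range(2)]
            if f(contr) < f(worst):
                pts[2] = contr
            else:
                # shrink the whole simplex toward the best vertex
                pts = [best] + [[(p[k] + best[k]) / 2 for k in range(2)] for p in pts[1:]]
    return min(pts, key=f)

# minimize (x-1)^2 + (y+2)^2, whose minimum is at (1, -2)
xmin = nelder_mead(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
print([round(v, 3) for v in xmin])  # ~[1.0, -2.0]
```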
Describe the bug
When I tried to catkin_make the repo, I got the following error about fmt, even though I have already built and linked libfmt successfully.
Environment:
Additional context
The terminal shows:
[ 85%] Linking CXX executable /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/direct_visual_lidar_calibration/preprocess
/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `fmt::v8::detail::throw_format_error(char const*)'
/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `std::locale fmt::v8::detail::locale_ref::getstd::locale() const'
/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `fmt::v8::detail::error_handler::on_error(char const*)'
Could you please tell me what I should do to fix this bug? Thanks a lot.
Is your feature request related to a problem? Please describe.
This is nice work for calibration, but we would like to use other alignment algorithms to fit other situations, such as MMI and NMI.
Describe the solution you'd like
It would be great to provide plug-and-play modules, or documentation on how to integrate new alignment methods into the existing code.
Hello, my code has been compiled successfully, but there was a double free or corruption (out) issue during rosrun. May I know the reason for this and how to solve it? I hope to receive your reply. Thank you very much.
When I try to preprocess the ouster data for the Ouster-camera calibration, it goes wrong. I followed the instructions (https://koide3.github.io/direct_visual_lidar_calibration/example/) step by step, but I get this error when I try to run the preprocess:
ros2 run direct_visual_lidar_calibration preprocess ouster ouster_preprocessed -adv
data_path: ouster
dst_path : ouster_preprocessed
[INFO] [1694810226.604455129] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_26_51/rosbag2_2023_03_28-16_26_51_0.db3' for READ_ONLY.
[INFO] [1694810226.609745481] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
input_bags:
- ouster/rosbag2_2023_03_28-16_26_51
- ouster/rosbag2_2023_03_28-16_25_54
[INFO] [1694810226.615058493] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
topics in ouster/rosbag2_2023_03_28-16_25_54:
- /camera_info : sensor_msgs/msg/CameraInfo
- /image : sensor_msgs/msg/Image
- /points : sensor_msgs/msg/PointCloud2
selected topics:
- camera_info: /camera_info
- image : /image
- points : /points
[INFO] [1694810226.620237488] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
intensity_channel: reflectivity
[INFO] [1694810226.664490737] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
try to get the camera model automatically
[INFO] [1694810226.755682896] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
camera_model: plumb_bob
image_size : 2448 2048
intrinsics : 1454.66 1455.21 1229.26 1010.66
dist_coeffs : -0.0507013 0.111236 -0.000881838 0.000141199 -0.0588915
processing images and points (num_threads_per_bag=12)
start processing ouster/rosbag2_2023_03_28-16_25_54
[INFO] [1694810226.760323450] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
[INFO] [1694810226.869450709] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
[ros2run]: Segmentation fault
Environment:
Hi,
Thanks for the tool. When pulling the Docker image for Humble, I get an architecture-mismatch error. It would be very beneficial to have arm64 support; alternatively, a guideline for Jetson devices would be a solution.
Hi, this is great work and I would like to run it on my computer, but my data is split into PCD files and PNG files. How can I make it work with my data? Thanks for your answer.
[ 77%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_auto
[ 80%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_manual
[ 87%] Built target preprocess
[ 90%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/preprocess_map
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::update_image()':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:239: undefined reference to `cv::imshow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&)'
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::on_mouse(int, int, int, int)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:281: undefined reference to `cv::imshow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&)'
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::InitialGuessManual(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:122: undefined reference to `cv::namedWindow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:123: undefined reference to `cv::setMouseCallback(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void (*)(int, int, int, int, void*), void*)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/initial_guess_manual.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_manual] Error 1
make[1]: *** [CMakeFiles/Makefile2:504: direct_visual_lidar_calibration/CMakeFiles/initial_guess_manual.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: CMakeFiles/initial_guess_auto.dir/src/initial_guess_auto.cpp.o: in function `vlcal::InitialGuessAuto::read_correspondences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<vlcal::Frame const> const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_auto.cpp:63: undefined reference to `cv::imread(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: CMakeFiles/initial_guess_auto.dir/src/initial_guess_auto.cpp.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:568: undefined reference to `cv::Mat::updateContinuityFlag()'
/usr/bin/ld: /usr/include/opencv4/opencv2/core/mat.inl.hpp:548: undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/initial_guess_auto.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_auto] Error 1
make[1]: *** [CMakeFiles/Makefile2:855: direct_visual_lidar_calibration/CMakeFiles/initial_guess_auto.dir/all] Error 2
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:568: undefined reference to `cv::Mat::updateContinuityFlag()'
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `vlcal::PreprocessMap::save_lidar_data(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<vlcal::Frame const> const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:210: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:211: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:548: undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `vlcal::PreprocessMap::run(int, char**)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:69: undefined reference to `cv::imread(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:75: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/preprocess_map.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/preprocess_map] Error 1
make[1]: *** [CMakeFiles/Makefile2:963: direct_visual_lidar_calibration/CMakeFiles/preprocess_map.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j36 -l36" failed
Why is pose_derivatives_t0[i] = H_pose_0_a + H_pose_delta * H_delta_0 rather than just H_pose_delta * H_delta_0, which would mirror pose_derivatives_t1[i] = H_pose_delta * H_delta_1?
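One plausible reading (my interpretation, not confirmed by the author): if the t0 pose enters the residual both directly and through the interpolated delta, the chain rule yields a direct term plus an indirect term, whereas a variable that only enters through delta gets the indirect term alone. A generic finite-difference illustration of that pattern, with hypothetical scalar stand-ins f and d:

```python
import math

# r depends on x directly AND through d(x, y):  r(x, y) = f(x, d(x, y)).
#   dr/dx = df/dx (direct) + df/dd * dd/dx   (two terms)
#   dr/dy =                  df/dd * dd/dy   (indirect only)

def d(x, y):            # hypothetical "delta" mixing both variables
    return 0.3 * x + 0.7 * y

def f(x, delta):        # hypothetical residual with a *direct* x term
    return math.sin(x) + delta ** 2

def r(x, y):
    return f(x, d(x, y))

def numdiff(fun, v, eps=1e-6):
    """Central finite difference of a scalar function."""
    return (fun(v + eps) - fun(v - eps)) / (2 * eps)

x, y = 0.4, 1.1
delta = d(x, y)
drdx = math.cos(x) + 2 * delta * 0.3   # direct + indirect
drdy = 2 * delta * 0.7                 # indirect only
```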
I'd like to express my sincere gratitude for open-sourcing such an outstanding piece of work. This is, by far, the most user-friendly and efficient calibration tool I have ever used!
While testing on your provided dataset, I found that the results were exceptionally accurate. I noticed in your paper that you mentioned using the "pseudo" ground truth as a reference. I'm keen to see how the results from your algorithm align with this "pseudo" ground truth to understand its precision further.
Additionally, I am interested in using a fisheye camera. Could you guide me on how to adjust the configuration parameters in the files to accommodate this type of camera?
If you could share this information and the data, I would be immensely grateful.
Thank you once again for your remarkable contribution to the community.
In the function Eigen::Isometry3d PoseEstimation::estimate_pose_lsq(), the line "Sophus::SE3d T_camera_lidar = Sophus::SE3d(init_T_camera_lidar.matrix());" produces the error "error: static assertion failed: Cannot format an argument. To make type T formattable provide a formatter specialization".
My environment: Ubuntu 20.04, ROS1, GTSAM 4.0.2, Ceres 1.14.0. When I ran catkin_make, I encountered the following errors:
Base path: /home/doongli/calib_ws
Source space: /home/doongli/calib_ws/src
Build space: /home/doongli/calib_ws/build
Devel space: /home/doongli/calib_ws/devel
Install space: /home/doongli/calib_ws/install
[ 3%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o
[ 6%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o
[ 9%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/calib/visual_camera_calibration.cpp.o
In file included from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/common/estimate_pose.cpp:9:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/thirdparty/Sophus/sophus/ceres_manifold.hpp:3:10: fatal error: ceres/manifold.h: No such file or directory
3 | #include <ceres/manifold.h>
| ^~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:154: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/calib/visual_camera_calibration.cpp:10:10: fatal error: ceres/autodiff_first_order_function.h: No such file or directory
10 | #include <ceres/autodiff_first_order_function.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:284: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/calib/visual_camera_calibration.cpp.o] Error 1
In file included from /usr/local/include/gtsam/base/types.h:31,
from /usr/local/include/gtsam/global_includes.h:22,
from /usr/local/include/gtsam/base/Vector.h:27,
from /usr/local/include/gtsam/base/Matrix.h:26,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/tbb/task_scheduler_init.h:21:154: note: #pragma message: TBB Warning: tbb/task_scheduler_init.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.
21 | #pragma message("TBB Warning: tbb/task_scheduler_init.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.")
| ^
In file included from /usr/local/include/gtsam/geometry/Unit3.h:36,
from /usr/local/include/gtsam/geometry/Rot3.h:25,
from /usr/local/include/gtsam/geometry/Pose3.h:24,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/tbb/mutex.h:21:140: note: #pragma message: TBB Warning: tbb/mutex.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.
21 | #pragma message("TBB Warning: tbb/mutex.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.")
| ^
In file included from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor.hpp:61,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:11:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor_impl.hpp: In member function ‘virtual boost::shared_ptr<gtsam::GaussianFactor> vlcal::IntegratedCT_GICPFactor_<TargetFrame, SourceFrame>::linearize(const gtsam::Values&) const’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor_impl.hpp:104:69: error: ‘Hat’ is not a member of ‘gtsam::SO3’
104 | H_transed_pose.block<3, 3>(0, 0) = pose.linear() * -gtsam::SO3::Hat(source_pt.template head<3>());
| ^~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘virtual void vlcal::DynamicPointCloudIntegrator::insert_points(const ConstPtr&)’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:31: error: ‘PriorFactor’ is not a member of ‘gtsam’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
   |                             ^~~~~~~~~~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:9: error: parse error in template argument list
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
   |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:55: error: no match for ‘operator>’ (operand types are ‘<unresolved overloaded function type>’ and ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’})
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:43:6: note: candidate: ‘template<class T> bool boost::operator>(const boost::optional<T>&, const boost::optional<T>&)’
43 | bool operator > ( optional<T> const& x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:43:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:77:6: note: candidate: ‘bool boost::operator>(const boost::optional<T>&, const T&) [with T = boost::shared_ptr<gtsam::noiseModel::Isotropic>]’
77 | bool operator > ( optional<T> const& x, T const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:77:38: note: no known conversion for argument 1 from ‘’ to ‘const boost::optional<boost::shared_ptr<gtsam::noiseModel::Isotropic> >&’
77 | bool operator > ( optional<T> const& x, T const& y )
| ~~~~~~~~~~~~~~~~~~~^
/usr/include/boost/optional/detail/optional_relops.hpp:111:6: note: candidate: ‘template<class T> bool boost::operator>(const T&, const boost::optional<T>&)’
111 | bool operator > ( T const& x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:111:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:146:6: note: candidate: ‘template<class T> bool boost::operator>(const boost::optional<T>&, boost::none_t)’
146 | bool operator > ( optional<T> const& x, none_t y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:146:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: couldn’t deduce template parameter ‘T’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:180:6: note: candidate: ‘template<class T> bool boost::operator>(boost::none_t, const boost::optional<T>&)’
180 | bool operator > ( none_t x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:180:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:625:9: note: candidate: ‘template<class IteratorT, class ForwardRange> typename boost::enable_if<boost::mpl::not_<boost::is_base_and_derived<boost::iterator_range_detail::iterator_range_tag, ForwardRange> >, bool>::type boost::operator>(const ForwardRange&, const boost::iterator_range<IteratorT>&)’
625 | operator>( const ForwardRange& l, const iterator_range<IteratorT>& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:625:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::iterator_range<IteratorT>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:714:9: note: candidate: ‘template<class Iterator1T, class Iterator2T> bool boost::operator>(const boost::iterator_range<Iterator1T>&, const boost::iterator_range<Iterator2T>&)’
714 | operator>( const iterator_range<Iterator1T>& l, const iterator_range<Iterator2T>& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:714:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::iterator_range<Iterator1T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:724:9: note: candidate: ‘template<class IteratorT, class ForwardRange> typename boost::enable_if<boost::mpl::not_<boost::is_base_and_derived<boost::iterator_range_detail::iterator_range_tag, ForwardRange> >, bool>::type boost::operator>(const boost::iterator_range<IteratorT>&, const ForwardRange&)’
724 | operator>( const iterator_range<IteratorT>& l, const ForwardRange& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:724:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: couldn’t deduce template parameter ‘IteratorT’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/variant/variant.hpp:42,
from /usr/include/boost/variant.hpp:17,
from /usr/local/include/gtsam/inference/EliminateableFactorGraph.h:23,
from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:25,
from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
from /usr/local/include/gtsam/nonlinear/LevenbergMarquardtOptimizer.h:23,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:6:
/usr/include/boost/blank.hpp:83:13: note: candidate: ‘bool boost::operator>(const boost::blank&, const boost::blank&)’
83 | inline bool operator>(const blank&, const blank&)
| ^~~~~~~~
/usr/include/boost/blank.hpp:83:23: note: no known conversion for argument 1 from ‘’ to ‘const boost::blank&’
83 | inline bool operator>(const blank&, const blank&)
| ^~~~~~~~~~~~
In file included from /usr/include/boost/lexical_cast/detail/converter_lexical.hpp:50,
from /usr/include/boost/lexical_cast/try_lexical_convert.hpp:44,
from /usr/include/boost/lexical_cast.hpp:32,
from /usr/include/boost/date_time/string_parse_tree.hpp:13,
from /usr/include/boost/date_time/period_parser.hpp:14,
from /usr/include/boost/date_time/date_facet.hpp:23,
from /usr/include/boost/date_time/gregorian/gregorian_io.hpp:16,
from /usr/include/boost/date_time/gregorian/gregorian.hpp:31,
from /usr/include/boost/date_time/posix_time/time_formatters.hpp:12,
from /usr/include/boost/date_time/posix_time/posix_time.hpp:24,
from /usr/local/include/gtsam/nonlinear/LevenbergMarquardtOptimizer.h:26,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:6:
/usr/include/boost/array.hpp:336:10: note: candidate: ‘template<class T, long unsigned int N> bool boost::operator>(const boost::array<T, sz>&, const boost::array<T, sz>&)’
336 | bool operator> (const array<T,N>& x, const array<T,N>& y) {
| ^~~~~~~~
/usr/include/boost/array.hpp:336:10: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::array<T, sz>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘void vlcal::DynamicPointCloudIntegrator::voxelgrid_task()’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:144:43: error: ‘const class gtsam::Pose3’ has no member named ‘interpolateRt’
144 | T_odom_lidar = T_odom_lidar_begin.interpolateRt(T_odom_lidar_end, t);
| ^~~~~~~~~~~~~
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:245: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:990: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j36 -l36" failed
There is an error when I use the docker image; the log is as follows:
glfw error 65544: X11: The DISPLAY environment variable is missing
failed to initialize GLFW
LiDAR: Velodyne 16-line
Registration failed; the point cloud registration result is chaotic.
The following warnings are displayed:
warning: large point timestamp (1687765073.393736 > 1.0) found!!
: assume that point times are absolute and convert them to relative
: replace_frame_stamp=1 wrt_first_frame_timestamp=1
warning: use first point timestamp as frame timestamp
: frame=1687765073.393736 point=1687765073.293307
Before opening an issue
If the issue is about build errors:
Describe the bug
An error occurred when installing Iridescence.
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots and sample data
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp: In function ‘void ImGui_ImplGlfw_UpdateMonitors()’:
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp:776:9: error: ‘glfwGetMonitorWorkarea’ was not declared in this scope
glfwGetMonitorWorkarea(glfw_monitors[n], &x, &y, &w, &h);
^~~~~~~~~~~~~~~~~~~~~~
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp:776:9: note: suggested alternative: ‘glfwGetMonitorName’
glfwGetMonitorWorkarea(glfw_monitors[n], &x, &y, &w, &h);
Environment:
Hi,
This work looks great and I look forward to using it. However, I'm running into the following error when using catkin_make
on ROS Noetic:
/home/paul/catkin_ws/src/direct_visual_lidar_calibration/thirdparty/Sophus/sophus/common.hpp:35:10: fatal error: fmt/format.h: No such file or directory
35 | #include <fmt/format.h>
| ^~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:174: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:9306: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Thanks
I think the bug might be caused by the matplotlib version.
When I was using find_matches_superglue.py for the automatic initial guess, a ValueError was raised at line 136 of direct_visual_lidar_calibration/scripts/find_matches_superglue.py: "Colormap turbo is not recognized. Possible values are: Accent, Accent_r, ..."
I am using matplotlib 3.1.2. Could you share your matplotlib version so I can check whether upgrading solves the bug?
Before opening an issue
If the issue is about build errors:
If the issue is about calibration procedure and results:
If the issue is about the algorithm:
Describe the bug
I am using a fisheye lens for lidar camera calibration, but I'm encountering issues with the calibration process. Specifically, I am using rectified fisheye images for the camera calibration process. The distortion model I'm using, "equidistant," is throwing an error. I referred to the official pages "http://wiki.ros.org/camera_calibration" and "http://docs.ros.org/en/melodic/api/sensor_msgs/html/namespacesensor__msgs_1_1distortion__models.html#a51077dbd502bdea20f46493798ca807b" for information. According to these resources, for the fisheye lens in the Noetic version, the distortion model should be "equidistant."
I am able to run the preprocessing step, but initial_guess_manual fails with the error "error: unknown camera model equidistant".
command: rosrun direct_visual_lidar_calibration initial_guess_manual /media/user/fisheye/fish_test_2/process/
Upon reviewing the code, I noticed that the distortion model for the fisheye lens is specified as "FISHEYE". I attempted to replace this model with "equidistant" in calib.json, but it did not work as expected. Although I am able to obtain the image for choosing corresponding points, the target LiDAR points are missing, and the terminal shows a camera FoV of 0 deg.
Environment:
I have installed the repo based on the guidelines provided in
https://koide3.github.io/direct_visual_lidar_calibration/
I have collected rosbag data, but when I try to run the toolbox on it, all I see in the documentation are ROS2 commands.
It would be really helpful to include the ROS1 commands in the documentation.
I tried installing ROS2 alongside ROS1, but I faced issues with sourcing and building the project, so I am requesting the ROS1 commands.
Hello, thank you for your great work. As a beginner, I have a question about generating intensity images from point clouds. In the paper, you create a virtual camera when obtaining point clouds from a Livox sensor (am I understanding this correctly?). Does this mean that I need to know the virtual camera's intrinsic parameters, distortion parameters, and the LiDAR's placement, orientation, and spatial position in advance? I don't think I need the extrinsic parameters, because my goal is to solve for the extrinsics. Is this virtual camera then used to project the point cloud data onto an image? Does this result in an inaccurate projection of the point cloud positions onto the image?
Before opening an issue
If the issue is about build errors:
If the issue is about calibration procedure and results:
If the issue is about the algorithm:
Describe the bug
Thank you for your amazing work. I noticed a very small bug while modifying the code:
p.add("dst_path", 2);
seems like it should be changed to
p.add("dst_path", 1);
Otherwise it will add dst_path twice, and when I then add another command line parameter, an error may be reported:
terminate called after throwing an instance of 'boost::wrapexcept<boost::program_options::multiple_occurrences>'
what(): option '--dst_path' cannot be specified more than once
To Reproduce
With `p.add("dst_path", 2);`:
With `p.add("dst_path", 1);`:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots and sample data
If applicable, add screenshots and sample data to reproduce your problem.
Environment:
Additional context
Add any other context about the problem here.