koide3 / direct_visual_lidar_calibration

A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]

Home Page: https://koide3.github.io/direct_visual_lidar_calibration/



direct_visual_lidar_calibration

This package provides a toolbox for LiDAR-camera calibration that is:

  • Generalizable: It can handle various LiDAR and camera projection models including spinning and non-repetitive scan LiDARs, and pinhole, fisheye, and omnidirectional projection cameras.
  • Target-less: It does not require a calibration target but uses the environment structure and texture for calibration.
  • Single-shot: At a minimum, only one pairing of a LiDAR point cloud and a camera image is required for calibration. Optionally, multiple LiDAR-camera data pairs can be used for improving the accuracy.
  • Automatic: The calibration process is automatic and does not require an initial guess.
  • Accurate and robust: It employs a pixel-level direct LiDAR-camera registration algorithm based on the normalized information distance (NID) that is more robust and accurate than edge-based indirect LiDAR-camera registration (see the definition below).
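
For reference, NID between the LiDAR intensity distribution X and the image intensity distribution Y follows the standard definition in terms of the joint entropy H(X, Y) and the mutual information I(X; Y):

$$\mathrm{NID}(X, Y) = \frac{H(X, Y) - I(X; Y)}{H(X, Y)}, \qquad I(X; Y) = H(X) + H(Y) - H(X, Y)$$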

Documentation: https://koide3.github.io/direct_visual_lidar_calibration/
Docker hub: koide3/direct_visual_lidar_calibration


Video

Dependencies

Getting started

  1. Installation / Docker images
  2. Data collection
  3. Calibration example
  4. Program details

License

This package is released under the MIT license.

Publication

Koide et al., General, Single-shot, Target-less, and Automatic LiDAR-Camera Extrinsic Calibration Toolbox, ICRA 2023.

Contact

Kenji Koide, National Institute of Advanced Industrial Science and Technology (AIST), Japan

direct_visual_lidar_calibration's People

Contributors

koide3, seekerzero


direct_visual_lidar_calibration's Issues

Results are poor! Why?

Thank you for your wonderful research.


Describe the bug

  • We do not obtain a good enough initial transform via SuperGlue. Why?

  • We ran the calibrator on our bags, but the resulting NID was approximately 0.95. We then assumed our bags did not meet the package's requirements, so we tried the node with your bags, but the result was still not perfect. Has anyone obtained a perfect calibration result?

  • The sensors are mounted on a robot, so we cannot move them along a vertical line; instead, we drive for 10-15 seconds to generate a dense point cloud. Could this be the source of the error?

  • The LiDAR and camera are not mounted as close together as in your setup. Could this be why the calibration result is not good?

  • Another problem is that we do not have the ground-truth calibration of the example bags, so we cannot fully understand the result in your visualizer. We would like to review it in RViz.

  • Lastly, has anyone obtained a perfect result?

To Reproduce
We followed all the steps exactly as described in the documentation.

Screenshots and sample data

The result of the experiment with your bags (2 bags):

(screenshot)


`dst_path` param is repeated in the `positional_option_description`


Describe the bug

Thank you for your amazing work. I noticed a very small bug while modifying the code.

p.add("dst_path", 2); seems like it should be changed to p.add("dst_path", 1);.

Otherwise, dst_path is added twice as a positional option, and when another command-line parameter is added, the following error may be reported:

terminate called after throwing an instance of 'boost::wrapexcept<boost::program_options::multiple_occurrences>'
  what():  option '--dst_path' cannot be specified more than once
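
For illustration, a minimal standalone sketch of how this positional-argument setup behaves with Boost.Program_options (option names follow the report; this is not the project's actual source):

#include <boost/program_options.hpp>
#include <string>

namespace po = boost::program_options;

int main(int argc, char** argv) {
  po::options_description desc("options");
  desc.add_options()
    ("data_path", po::value<std::string>(), "input directory")
    ("dst_path", po::value<std::string>(), "output directory");

  po::positional_options_description p;
  p.add("data_path", 1);
  // p.add("dst_path", 2) would map up to TWO positional tokens to dst_path;
  // since dst_path is a single-value option, a second token (or an additional
  // explicit --dst_path) then throws boost::program_options::multiple_occurrences.
  p.add("dst_path", 1);  // the proposed fix: exactly one token for dst_path

  po::variables_map vm;
  po::store(po::command_line_parser(argc, argv).options(desc).positional(p).run(), vm);
  po::notify(vm);
  return 0;
}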

To Reproduce

With `p.add("dst_path", 2);`, the error above is raised; with `p.add("dst_path", 1);`, it is not.


Installation error

Before opening an issue
If the issue is about build errors:

Describe the bug
An error occurred when installing Iridescence:


Screenshots and sample data
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp: In function ‘void ImGui_ImplGlfw_UpdateMonitors()’:
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp:776:9: error: ‘glfwGetMonitorWorkarea’ was not declared in this scope
glfwGetMonitorWorkarea(glfw_monitors[n], &x, &y, &w, &h);
^~~~~~~~~~~~~~~~~~~~~~
/home/agilex/iridescence/thirdparty/imgui/backends/imgui_impl_glfw.cpp:776:9: note: suggested alternative: ‘glfwGetMonitorName’
glfwGetMonitorWorkarea(glfw_monitors[n], &x, &y, &w, &h);

Environment:

  • Ubuntu 18.04
  • ROS 1

Failed to calibrate the data we collected ourselves

LiDAR: Velodyne 16-beam
Registration fails, and the integrated point cloud is chaotic.

The following warning is displayed:

warning: large point timestamp (1687765073.393736 > 1.0) found!!
: assume that point times are absolute and convert them to relative
: replace_frame_stamp=1 wrt_first_frame_timestamp=1
warning: use first point timestamp as frame timestamp
: frame=1687765073.393736 point=1687765073.293307

How can I replace the NID or MI method?

Is your feature request related to a problem? Please describe.
This is nice work for calibration, but we would like to use other metrics, such as MMI or NMI, to fit other situations.

Describe the solution you'd like
It would be great to have plug-and-play modules, or guidance on how to integrate new alignment metrics into the existing code (see the sketch below).
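
Both NID and MI (and their normalized variants) are computed from the same joint histogram of LiDAR intensities and pixel intensities, so swapping the metric mainly means changing the final reduction step. A minimal standalone sketch of that computation (hypothetical helper, not the project's API):

#include <array>
#include <cmath>

// Compute MI and NID from a B x B joint histogram of
// (LiDAR intensity bin, image intensity bin) counts.
struct MetricResult { double mi; double nid; };

template <int B>
MetricResult metrics_from_hist(const std::array<std::array<double, B>, B>& hist) {
  double total = 0.0;
  for (const auto& row : hist) for (double v : row) total += v;

  std::array<double, B> px{}, py{};
  double hxy = 0.0;  // joint entropy H(X,Y)
  for (int i = 0; i < B; i++) {
    for (int j = 0; j < B; j++) {
      const double p = hist[i][j] / total;
      px[i] += p;
      py[j] += p;
      if (p > 0.0) hxy -= p * std::log(p);
    }
  }
  double hx = 0.0, hy = 0.0;  // marginal entropies H(X), H(Y)
  for (double p : px) if (p > 0.0) hx -= p * std::log(p);
  for (double p : py) if (p > 0.0) hy -= p * std::log(p);

  const double mi = hx + hy - hxy;      // mutual information I(X;Y)
  const double nid = (hxy - mi) / hxy;  // normalized information distance
  return {mi, nid};
}

An NMI-style variant could, for example, return (hx + hy) / hxy instead of the NID expression.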

Request for the "pseudo" ground truth from livox_ros1 dataset and configuration for Fisheye Cameras

I'd like to express my sincere gratitude for open-sourcing such an outstanding piece of work. This is, by far, the most user-friendly and efficient calibration tool I have ever used!

While testing on your provided dataset, I found that the results were exceptionally accurate. I noticed in your paper that you mentioned using the "pseudo" ground truth as a reference. I'm keen to see how the results from your algorithm align with this "pseudo" ground truth to understand its precision further.

Additionally, I am interested in using a fisheye camera. Could you guide me on how to adjust the configuration parameters in the files to accommodate this type of camera?

If you could share this information and the data, I would be immensely grateful.

Thank you once again for your remarkable contribution to the community.

Can this methodology work for a repetitive-scan LiDAR with only a 120-degree FoV?

Thanks for your great work!
I have already tested your dataset with the docker image (ROS2) you provide.
However, when I try with my own data, the automatic point integration fails. Are there some parameters that I can tune?
Here is a description of my data:
LiDAR: solid-state, 120-degree FoV, repetitive scan pattern.
Rather than moving the LiDAR up and down for a while, I moved it slowly straight forward for a few meters (because the LiDAR is installed on a vehicle).
The point integration node's output trajectory looks as if the LiDAR did not move at all.

Some errors about fmt when running catkin_make


Describe the bug
When I run catkin_make on the repo, I get errors about fmt, even though libfmt is linked successfully.

Environment:

  • OS: [Ubuntu 20.04]
  • ROS version: [ROS1 noetic]

Additional context
The terminal shows:
[ 85%] Linking CXX executable /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/direct_visual_lidar_calibration/preprocess

/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `fmt::v8::detail::throw_format_error(char const*)'

/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `std::locale fmt::v8::detail::locale_ref::get<std::locale>() const'

/usr/bin/ld: /home/bit/Workspace/direct_visual_lidar_calibration/devel/lib/libdirect_visual_lidar_calibration.so: undefined reference to `fmt::v8::detail::error_handler::on_error(char const*)'

Could you please tell me what I should do to fix this? Thanks a lot.

ROS 2 node for camera lidar fusion

Hi,

thanks for the great project and easy calibration of lidar and camera.

As a nice addition to the project itself I would like to request a ROS 2 node for the implementation of fusion/overlay from camera image and lidar point cloud. The calibration data could directly be used to publish a fused point cloud.

Don't know what's the best way to republish, but I could imagine something like the following:

  • colorised point cloud (with image pixel values)
  • depth image (mainly from point cloud)
  • projection as a point cloud

A publishing ROS 2 node would also be nice so that the results can be visualised in RViz.

I didn't find a project using ROS 2 for that kind of task; any suggestions would also be welcome.

Thanks again for the really nice work!

Cheers
jp

Velodyne 16-beam LiDAR cannot complete calibration


Describe the bug
Unable to generate a dense point cloud during the data preprocessing stage with a Velodyne 16-beam LiDAR (screenshot).


Environment:

  • OS: Ubuntu 20.04
  • ROS version: ROS1 noetic


Some questions about input data in paper

Hello, I am a beginner in the field of LiDAR-camera calibration. After reading your paper and code, I find them interesting and exciting, but I have some questions about the process.
In the paper, the inputs to preprocessing are LiDAR data and camera images, but when I look at the code and the livox data, the input is just a *.db3 bag. So, is the LiDAR image derived from the point cloud data, or do you generate the images in some other way (see the projection sketch below)? If I have images and LiDAR data, can I skip preprocessing? Thanks a lot.
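
For context on how a LiDAR intensity image of this kind can be produced: points are projected through a virtual camera and their intensities are written into pixels. A minimal sketch with a hypothetical pinhole model (not the package's actual implementation):

#include <Eigen/Core>
#include <opencv2/core.hpp>
#include <vector>

// Render a LiDAR intensity image by projecting 3D points (with per-point
// intensities) through a virtual pinhole camera placed at the LiDAR origin.
cv::Mat render_intensity_image(const std::vector<Eigen::Vector3d>& points,
                               const std::vector<double>& intensities,
                               int width, int height,
                               double fx, double fy, double cx, double cy) {
  cv::Mat image = cv::Mat::zeros(height, width, CV_64FC1);
  for (size_t i = 0; i < points.size(); i++) {
    const Eigen::Vector3d& pt = points[i];
    if (pt.z() <= 0.0) continue;  // skip points behind the virtual camera
    const int u = static_cast<int>(fx * pt.x() / pt.z() + cx);
    const int v = static_cast<int>(fy * pt.y() / pt.z() + cy);
    if (u < 0 || u >= width || v < 0 || v >= height) continue;
    image.at<double>(v, u) = intensities[i];
  }
  return image;
}

Note that the virtual camera's intrinsics can be chosen freely, since they only define the rendering; no real camera parameters or extrinsics are needed for this step.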

Installation Problem

CMake Error at CppUnitLite/CMakeLists.txt:6 (add_library):
Target "CppUnitLite" links to target "Boost::boost" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?

CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::serialization" but the target was
not found. Perhaps a find_package() call is missing for an IMPORTED
target, or an ALIAS target is missing?

CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::system" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?

CMake Error at gtsam/CMakeLists.txt:107 (add_library):
Target "gtsam" links to target "Boost::filesystem" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
How can I deal with this?

"Segmentation fault" in Preprocessing

Describe the bug
When I try to preprocess the data in livox_ros1.tar.gz, it always fails. I followed the instructions step by step, but I don't know why this happens.

By the way, the highest CPU usage rate is 59.1%.

Screenshots and sample data

pi@ubuntu:~/catkin_ws$ rosrun direct_visual_lidar_calibration preprocess /mnt/hgfs/DATA_SENSOR/livox livox_preprocessed -av
data_path: /mnt/hgfs/DATA_SENSOR/livox
dst_path : livox_preprocessed
input_bags:
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_44_10.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_44_54.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_46_10.bag
- /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_46_54.bag
topics in /mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag:
- /livox/points : sensor_msgs/PointCloud2
- /livox/imu : sensor_msgs/Imu
- /image : sensor_msgs/Image
- /camera_info : sensor_msgs/CameraInfo
selected topics:
- camera_info: /camera_info
- image      : /image
- points     : /livox/points
intensity_channel: intensity
try to get the camera model automatically
warning: bag does not contain topic
       : bag_filename=/mnt/hgfs/DATA_SENSOR/livox/rosbag2_2023_03_09-13_42_46.bag
       : topic=/camera_info
Segmentation fault (core dumped)

Environment:

  • OS: Ubuntu 20.04
  • ROS version: ROS1 noetic

Double free or corruption (out), Aborted (core dumped): the graphics window flashes and the program reports an error

Hello, thank you very much for sharing your work, but the following problems occurred when reproducing it. When using the Ouster data you provided, I manually filled in the camera parameters according to the prompts, but the program still fails. The camera parameters were given according to the calib.json example.

The Environment : UBuntu 18.04 Ros:Melodic

Input :
rosrun direct_visual_lidar_calibration preprocess --data_path /home/zlzk/catkin_ws/ouster --dst_path /home/zlzk/catkin_ws/ouster_preprocessed --image_topic /image --points_topic /points --camera_model plumb_bob --camera_intrinsics 1452.71,1455.87,1265.25,1045.81 --camera_distortion_coeffs -0.0403,0.087,0.002,0.005 -adv

Result:
(screenshot)

The Environment : UBuntu 20.04 Ros:Noetic

Input :
rosrun direct_visual_lidar_calibration preprocess --data_path /home/zlzk/catkin_ws/ouster --dst_path /home/zlzk/catkin_ws/ouster_preprocessed --image_topic /image --points_topic /points --camera_model plumb_bob --camera_intrinsics 1452.71,1455.87,1265.25,1045.81 --camera_distortion_coeffs -0.0403,0.087,0.002,0.005 -adv

Result:
(screenshot)

I have also seen related issues. Even using the dataset provided there and switching to the vlp16 branch (https://github.com/koide3/direct_visual_lidar_calibration/issues/13), the same problem still exists.

(screenshot)

error while running direct_visual_lidar_calibration preprocess


Environment:
OS: Ubuntu 20.04
ROS Version: noetic

I am having an issue while running the preprocess command:
rosrun direct_visual_lidar_calibration preprocess data/ process/ -a -d -v

Error:

data_path: /lidar-cam-calib/data/
dst_path : /lidar-cam-calib/process/
input_bags:
- /lidar-cam-calib/data/_2023-04-13-17-49-57_0.bag
- /lidar-cam-calib/data/_2023-04-13-17-50-27_1.bag
- /lidar-cam-calib/data/_2023-04-13-17-50-57_2.bag
- /lidar-cam-calib/data/_2023-04-13-17-51-28_3.bag
- /lidar-cam-calib/data/_2023-04-13-17-51-57_4.bag
- /lidar-cam-calib/data/_2023-04-13-17-52-27_5.bag
topics in /lidar-cam-calib/data/_2023-04-13-17-52-27_5.bag:
selected topics:
- camera_info:
- image      : cam0/image_raw
- points     : /vlp16_1/velodyne_points
intensity_channel: intensity
terminate called after throwing an instance of 'rosbag::BagIOException'
  what(): Error opening file: cam0/image_raw
Aborted (core dumped)

Error from using docker environment

There is an error when I use the docker image; the log is as follows:
glfw error 65544: X11: The DISPLAY environment variable is missing
failed to initialize GLFW

ROS1 commands to run the project

I have installed the repo based on the guidelines provided in
https://koide3.github.io/direct_visual_lidar_calibration/

I have collected the rosbag data, but when I try to run it on my data, all I see in the documentation is the ROS2 commands.

It would be really helpful to include the ros1 commands in the documentation.

I tried installing ROS2 alongside ROS1, but I faced issues with sourcing and building the project, so I am requesting the ROS1 commands.

double free or corruption (out)

Hello, my code compiled successfully, but a "double free or corruption (out)" error occurred during rosrun. May I know the reason for this and how to solve it? I hope to receive your reply; thank you very much.

[Error] glfw error 65543 in initial_guess_manual

Thank you for open-sourcing this on GitHub; it helps me a lot in my studies! But I met an error today and hope you can help me :)

Describe the bug
When I use initial_guess_manual, I get the error "glfw error 65543: GLX: Failed to create context: GLXBadFBConfig" and I don't know how to solve it.

Screenshots and sample data

pi@ubuntu:~/catkin_ws$ rosrun direct_visual_lidar_calibration initial_guess_manual /home/pi/catkin_ws/ouster_pinhole_preprocessed
camera_model:plumb_bob
intrinsics :4
distortion :5
loading /home/pi/catkin_ws/ouster_pinhole_preprocessed/camera0-camera6-rslidar-chessboard_2023-07-26-17-35-23.bag.(png|ply)
image_size:1280x720
camera_fov:49.3068[deg]
glfw error 65543: GLX: Failed to create context: GLXBadFBConfig
Segmentation fault (core dumped)

Environment:

  • OS: Ubuntu 20.04
  • ROS version: ROS1 noetic
  • Virtual Machine

Additional context
There is no error in the preprocess step; I get the corresponding output.

Coordinate frame between Lidar and Camera

Hi, thanks for your great work! I'm still a newbie with LiDAR and cameras, so I just want to ask about the coordinate frames of both sensors. My LiDAR (Ouster) coordinate system is:

  • X-axis: Forward (towards the direction the vehicle moves)
  • Y-axis: Left
  • Z-axis: Upwards

Camera Coordinate System (Azure Kinect, OpenCV-based):

  • X-axis: Right (pointing to the right of the image)
  • Y-axis: Down (pointing downwards in the image)
  • Z-axis: Forward (out of the camera lens)

Does the calibration result, T_lidar_camera, already account for this change of convention, or should I flip the axes myself? Thanks! (See the sketch below.)
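
As a sanity check of what such an extrinsic means, here is a minimal Eigen sketch (variable naming assumed to follow the usual T_target_source convention, i.e. T_lidar_camera maps camera-frame points into the LiDAR frame):

#include <Eigen/Geometry>

// Under the T_target_source convention, p_lidar = T_lidar_camera * p_camera.
// The inverse maps LiDAR points into the (OpenCV-convention) camera frame;
// the rotation part already encodes the axis-convention change, so no
// additional manual flipping should be required.
Eigen::Vector3d lidar_to_camera(const Eigen::Isometry3d& T_lidar_camera,
                                const Eigen::Vector3d& p_lidar) {
  return T_lidar_camera.inverse() * p_lidar;
}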

Segmentation fault ouster bag

When I try to preprocess the ouster data for the Ouster-camera calibration, it fails. I followed the instructions (https://koide3.github.io/direct_visual_lidar_calibration/example/) step by step, but I get this error when running the preprocess:

ros2 run direct_visual_lidar_calibration preprocess ouster ouster_preprocessed -adv
data_path: ouster
dst_path : ouster_preprocessed
[INFO] [1694810226.604455129] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_26_51/rosbag2_2023_03_28-16_26_51_0.db3' for READ_ONLY.
[INFO] [1694810226.609745481] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
input_bags:
- ouster/rosbag2_2023_03_28-16_26_51
- ouster/rosbag2_2023_03_28-16_25_54
[INFO] [1694810226.615058493] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
topics in ouster/rosbag2_2023_03_28-16_25_54:
- /camera_info : sensor_msgs/msg/CameraInfo
- /image : sensor_msgs/msg/Image
- /points : sensor_msgs/msg/PointCloud2
selected topics:
- camera_info: /camera_info
- image      : /image
- points     : /points
[INFO] [1694810226.620237488] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
intensity_channel: reflectivity
[INFO] [1694810226.664490737] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
try to get the camera model automatically
[INFO] [1694810226.755682896] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
camera_model: plumb_bob
image_size  : 2448 2048
intrinsics  : 1454.66 1455.21 1229.26 1010.66
dist_coeffs :   -0.0507013     0.111236 -0.000881838  0.000141199   -0.0588915
processing images and points (num_threads_per_bag=12)
start processing ouster/rosbag2_2023_03_28-16_25_54
[INFO] [1694810226.760323450] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
[INFO] [1694810226.869450709] [rosbag2_storage]: Opened database 'ouster/rosbag2_2023_03_28-16_25_54/rosbag2_2023_03_28-16_25_54_0.db3' for READ_ONLY.
[ros2run]: Segmentation fault

Environment:

  • OS: Ubuntu 22.04
  • ROS version: ROS2 Humble

About data collection

Hi, thanks for sharing such wonderful work. I am curious whether it is necessary to move the LiDAR up and down during collection. If it is mounted on a robot or vehicle that can only move forward, backward, or spin, will the method still work under that condition? Thanks.

An error occurred when trying to calibrate a KITTI bag


Describe the bug
This work is quite wonderful, which makes me want to validate it on the KITTI dataset. However, an error occurred when I ran the preprocess command.

(screenshot)

To Reproduce
Steps to reproduce the behavior:

  1. Download the KITTI dataset: https://drive.google.com/drive/folders/1uFpFjxIBtGPnJbl5NnbUPtcEyx-cpufU?usp=drive_link
     I used kitti2bag to convert the KITTI data into a rosbag.
  2. Then manually calibrate the bag. This is the command I used:

     rosrun direct_visual_lidar_calibration preprocess kitti_bag kitti_bag_preprocessed \
       --image_topic /kitti/camera_color_left/image_raw \
       --points_topic /kitti/velo/pointcloud \
       --camera_model plumb_bob \
       --camera_intrinsics 767.5834,768.4421,200.9368,130.4244 \
       --camera_distortion_coeffs -0.0059,-2.4245e-04,-0.4068,0.5534,-8.4002
  3. Another question: if I want to calibrate based on a single pair of LiDAR point cloud and image, what should I do? I found that this code needs at least 2 frames of LiDAR and camera messages recorded for calibration.

I found a bug when using find_matches_superglue.py

I think the bug might be caused by the matplotlib version.
When I was using find_matches_superglue.py for the automatic initial guess, a ValueError was raised at line 136 of direct_visual_lidar_calibration/scripts/find_matches_superglue.py: "Colormap turbo is not recognized. Possible values are: Accent, Accent_r, ...".
I use matplotlib 3.1.2; could you give me your matplotlib version so I can check whether the bug can be solved?

catkin_make error

My environment: Ubuntu 20.04, ROS1, GTSAM 4.0.2, Ceres 1.14.0. When I used catkin_make, I encountered the following errors:

Base path: /home/doongli/calib_ws
Source space: /home/doongli/calib_ws/src
Build space: /home/doongli/calib_ws/build
Devel space: /home/doongli/calib_ws/devel
Install space: /home/doongli/calib_ws/install

Running command: "make cmake_check_build_system" in "/home/doongli/calib_ws/build"

Running command: "make -j36 -l36" in "/home/doongli/calib_ws/build"

[ 3%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o
[ 6%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o
[ 9%] Building CXX object direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/calib/visual_camera_calibration.cpp.o
In file included from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/common/estimate_pose.cpp:9:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/thirdparty/Sophus/sophus/ceres_manifold.hpp:3:10: fatal error: ceres/manifold.h: No such file or directory
3 | #include <ceres/manifold.h>
| ^~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:154: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/calib/visual_camera_calibration.cpp:10:10: fatal error: ceres/autodiff_first_order_function.h: No such file or directory
10 | #include <ceres/autodiff_first_order_function.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:284: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/calib/visual_camera_calibration.cpp.o] Error 1
In file included from /usr/local/include/gtsam/base/types.h:31,
from /usr/local/include/gtsam/global_includes.h:22,
from /usr/local/include/gtsam/base/Vector.h:27,
from /usr/local/include/gtsam/base/Matrix.h:26,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/tbb/task_scheduler_init.h:21:154: note: #pragma message: TBB Warning: tbb/task_scheduler_init.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.
21 | #pragma message("TBB Warning: tbb/task_scheduler_init.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.")
| ^
In file included from /usr/local/include/gtsam/geometry/Unit3.h:36,
from /usr/local/include/gtsam/geometry/Rot3.h:25,
from /usr/local/include/gtsam/geometry/Pose3.h:24,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/tbb/mutex.h:21:140: note: #pragma message: TBB Warning: tbb/mutex.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.
21 | #pragma message("TBB Warning: tbb/mutex.h is deprecated. For details, please see Deprecated Features appendix in the TBB reference manual.")
| ^
In file included from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor.hpp:61,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:11:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor_impl.hpp: In member function ‘virtual boost::shared_ptr<gtsam::GaussianFactor> vlcal::IntegratedCT_GICPFactor_<TargetFrame, SourceFrame>::linearize(const gtsam::Values&) const’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/common/integrated_ct_gicp_factor_impl.hpp:104:69: error: ‘Hat’ is not a member of ‘gtsam::SO3’
104 | H_transed_pose.block<3, 3>(0, 0) = pose.linear() * -gtsam::SO3::Hat(source_pt.template head<3>());
| ^~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘virtual void vlcal::DynamicPointCloudIntegrator::insert_points(const ConstPtr&)’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:31: error: ‘PriorFactor’ is not a member of ‘gtsam’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^~~~~~~~~~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:9: error: parse error in template argument list
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:55: error: no match for ‘operator>’ (operand types are ‘’ and ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’})
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| gtsam::noiseModel::Isotropic::shared_ptr {aka boost::shared_ptr<gtsam::noiseModel::Isotropic>}
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:43:6: note: candidate: ‘template<class T> bool boost::operator>(const boost::optional<T>&, const boost::optional<T>&)’
43 | bool operator > ( optional<T> const& x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:43:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:77:6: note: candidate: ‘bool boost::operator>(const boost::optional<T>&, const T&) [with T = boost::shared_ptr<gtsam::noiseModel::Isotropic>]’
77 | bool operator > ( optional<T> const& x, T const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:77:38: note: no known conversion for argument 1 from ‘’ to ‘const boost::optional<boost::shared_ptr<gtsam::noiseModel::Isotropic> >&’
77 | bool operator > ( optional<T> const& x, T const& y )
| ~~~~~~~~~~~~~~~~~~~^
/usr/include/boost/optional/detail/optional_relops.hpp:111:6: note: candidate: ‘template<class T> bool boost::operator>(const T&, const boost::optional<T>&)’
111 | bool operator > ( T const& x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:111:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:146:6: note: candidate: ‘template<class T> bool boost::operator>(const boost::optional<T>&, boost::none_t)’
146 | bool operator > ( optional<T> const& x, none_t y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:146:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: couldn’t deduce template parameter ‘T’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/optional/optional.hpp:1597,
from /usr/include/boost/optional.hpp:15,
from /usr/local/include/gtsam/base/OptionalJacobian.h:25,
from /usr/local/include/gtsam/base/Matrix.h:25,
from /usr/local/include/gtsam/base/Manifold.h:22,
from /usr/local/include/gtsam/geometry/BearingRange.h:21,
from /usr/local/include/gtsam/geometry/Pose3.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/include/vlcal/preprocess/dynamic_point_cloud_integrator.hpp:8,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:1:
/usr/include/boost/optional/detail/optional_relops.hpp:180:6: note: candidate: ‘template<class T> bool boost::operator>(boost::none_t, const boost::optional<T>&)’
180 | bool operator > ( none_t x, optional<T> const& y )
| ^~~~~~~~
/usr/include/boost/optional/detail/optional_relops.hpp:180:6: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::optional<T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:625:9: note: candidate: ‘template<class IteratorT, class ForwardRange> typename boost::enable_if<boost::mpl::not_<boost::is_base_and_derived<boost::iterator_range_detail::iterator_range_tag, ForwardRange> >, bool>::type boost::operator>(const ForwardRange&, const boost::iterator_range<IteratorT>&)’
625 | operator>( const ForwardRange& l, const iterator_range<IteratorT>& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:625:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::iterator_range<IteratorT>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:714:9: note: candidate: ‘template<class Iterator1T, class Iterator2T> bool boost::operator>(const boost::iterator_range<Iterator1T>&, const boost::iterator_range<Iterator2T>&)’
714 | operator>( const iterator_range<Iterator1T>& l, const iterator_range<Iterator2T>& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:714:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::iterator_range<Iterator1T>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/ptr_container/ptr_map_adapter.hpp:24,
from /usr/include/boost/ptr_container/detail/serialize_ptr_map_adapter.hpp:9,
from /usr/include/boost/ptr_container/serialize_ptr_map.hpp:9,
from /usr/local/include/gtsam/nonlinear/Values.h:41,
from /usr/local/include/gtsam/nonlinear/NonlinearFactor.h:23,
from /usr/local/include/gtsam/slam/BetweenFactor.h:22,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:3:
/usr/include/boost/range/iterator_range_core.hpp:724:9: note: candidate: ‘template<class IteratorT, class ForwardRange> typename boost::enable_if<boost::mpl::not_<boost::is_base_and_derived<boost::iterator_range_detail::iterator_range_tag, ForwardRange> >, bool>::type boost::operator>(const boost::iterator_range<IteratorT>&, const ForwardRange&)’
724 | operator>( const iterator_range<IteratorT>& l, const ForwardRange& r )
| ^~~~~~~~
/usr/include/boost/range/iterator_range_core.hpp:724:9: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: couldn’t deduce template parameter ‘IteratorT’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
In file included from /usr/include/boost/variant/variant.hpp:42,
from /usr/include/boost/variant.hpp:17,
from /usr/local/include/gtsam/inference/EliminateableFactorGraph.h:23,
from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:25,
from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
from /usr/local/include/gtsam/nonlinear/LevenbergMarquardtOptimizer.h:23,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:6:
/usr/include/boost/blank.hpp:83:13: note: candidate: ‘bool boost::operator>(const boost::blank&, const boost::blank&)’
83 | inline bool operator>(const blank&, const blank&)
| ^~~~~~~~
/usr/include/boost/blank.hpp:83:23: note: no known conversion for argument 1 from ‘’ to ‘const boost::blank&’
83 | inline bool operator>(const blank&, const blank&)
| ^~~~~~~~~~~~
In file included from /usr/include/boost/lexical_cast/detail/converter_lexical.hpp:50,
from /usr/include/boost/lexical_cast/try_lexical_convert.hpp:44,
from /usr/include/boost/lexical_cast.hpp:32,
from /usr/include/boost/date_time/string_parse_tree.hpp:13,
from /usr/include/boost/date_time/period_parser.hpp:14,
from /usr/include/boost/date_time/date_facet.hpp:23,
from /usr/include/boost/date_time/gregorian/gregorian_io.hpp:16,
from /usr/include/boost/date_time/gregorian/gregorian.hpp:31,
from /usr/include/boost/date_time/posix_time/time_formatters.hpp:12,
from /usr/include/boost/date_time/posix_time/posix_time.hpp:24,
from /usr/local/include/gtsam/nonlinear/LevenbergMarquardtOptimizer.h:26,
from /home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:6:
/usr/include/boost/array.hpp:336:10: note: candidate: ‘template<class T, long unsigned int N> bool boost::operator>(const boost::array<T, N>&, const boost::array<T, N>&)’
336 | bool operator> (const array<T,N>& x, const array<T,N>& y) {
| ^~~~~~~~
/usr/include/boost/array.hpp:336:10: note: template argument deduction/substitution failed:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:91:147: note: ‘gtsam::noiseModel::Isotropic::shared_ptr’ {aka ‘boost::shared_ptr<gtsam::noiseModel::Isotropic>’} is not derived from ‘const boost::array<T, N>’
91 | graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(0, gtsam::Pose3(pred_T_odom_lidar_begin), gtsam::noiseModel::Isotropic::Precision(6, 1e3));
| ^
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘void vlcal::DynamicPointCloudIntegrator::voxelgrid_task()’:
/home/doongli/calib_ws/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:144:43: error: ‘const class gtsam::Pose3’ has no member named ‘interpolateRt’
144 | T_odom_lidar = T_odom_lidar_begin.interpolateRt(T_odom_lidar_end, t);
| ^~~~~~~~~~~~~
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:245: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:990: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j36 -l36" failed

About generating intensity images from point clouds

Hello, thank you for your great work. As a beginner, I have a question about generating intensity images from point clouds. In the paper, you create a virtual camera when getting point clouds from the Livox (am I understanding this correctly?). Does this mean I need to know the camera's intrinsic parameters, distortion parameters, and the LiDAR's placement, orientation, and spatial position in advance? I don't think I need the extrinsic parameters, because my goal is to solve for them. Do you then use this virtual camera to project the point cloud data onto an image? Does this result in an inaccurate projection of the point cloud positions onto the image?

Hello, I am using velodyne16, but after data preprocessing, the point cloud becomes disorganized


Describe the bug
After data preprocessing with a Velodyne 16-beam LiDAR, the point cloud becomes disorganized.

Screenshots and sample data
(screenshot)

How to specify camera parameters for preprocessing


Describe the bug
I used ROS1. During preprocessing, I saw from the notes you provided that I needed to specify the camera parameters in advance, but when I tried to specify them, I got the following results:

(screenshots)

What command should I type to do this?


catkin_make error

[ 77%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_auto
[ 80%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_manual
[ 87%] Built target preprocess
[ 90%] Linking CXX executable /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/preprocess_map
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::update_image()':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:239: undefined reference to `cv::imshow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&)'
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::on_mouse(int, int, int, int)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:281: undefined reference to `cv::imshow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&)'
/usr/bin/ld: CMakeFiles/initial_guess_manual.dir/src/initial_guess_manual.cpp.o: in function `vlcal::InitialGuessManual::InitialGuessManual(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:122: undefined reference to `cv::namedWindow(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_manual.cpp:123: undefined reference to `cv::setMouseCallback(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void (*)(int, int, int, int, void*), void*)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/initial_guess_manual.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_manual] Error 1
make[1]: *** [CMakeFiles/Makefile2:504: direct_visual_lidar_calibration/CMakeFiles/initial_guess_manual.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: CMakeFiles/initial_guess_auto.dir/src/initial_guess_auto.cpp.o: in function `vlcal::InitialGuessAuto::read_correspondences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<vlcal::Frame const> const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/initial_guess_auto.cpp:63: undefined reference to `cv::imread(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: CMakeFiles/initial_guess_auto.dir/src/initial_guess_auto.cpp.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:568: undefined reference to `cv::Mat::updateContinuityFlag()'
/usr/bin/ld: /usr/include/opencv4/opencv2/core/mat.inl.hpp:548: undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/initial_guess_auto.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/initial_guess_auto] Error 1
make[1]: *** [CMakeFiles/Makefile2:855: direct_visual_lidar_calibration/CMakeFiles/initial_guess_auto.dir/all] Error 2
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:568: undefined reference to `cv::Mat::updateContinuityFlag()'
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `vlcal::PreprocessMap::save_lidar_data(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<vlcal::Frame const> const&)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:210: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:211: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
/usr/bin/ld: /usr/include/opencv4/opencv2/core/mat.inl.hpp:548: undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
/usr/bin/ld: CMakeFiles/preprocess_map.dir/src/preprocess_map.cpp.o: in function `vlcal::PreprocessMap::run(int, char**)':
/home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:69: undefined reference to `cv::imread(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)'
/usr/bin/ld: /home/doongli/catkin_ws_4/src/direct_visual_lidar_calibration/src/preprocess_map.cpp:75: undefined reference to `cv::imwrite(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
collect2: error: ld returned 1 exit status
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/preprocess_map.dir/build.make:216: /home/doongli/catkin_ws_4/devel/lib/direct_visual_lidar_calibration/preprocess_map] Error 1
make[1]: *** [CMakeFiles/Makefile2:963: direct_visual_lidar_calibration/CMakeFiles/preprocess_map.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j36 -l36" failed

catkin_make error

Hello, I have this issue; can anyone help me, please?
/home/user/ros/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp: In member function ‘void vlcal::DynamicPointCloudIntegrator::voxelgrid_task()’:
/home/user/ros/src/direct_visual_lidar_calibration/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp:144:47: error: ‘const class gtsam::Pose3’ has no member named ‘interpolateRt’
144 | T_odom_lidar_end = T_odom_lidar_begin.interpolateRt(T_odom_lidar_end, t);
| ^~~~~~~~~~~~~
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:245: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/preprocess/dynamic_point_cloud_integrator.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:6503: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make: *** [Makefile:163: all] Error 2
Invoking "make -j12 -l12" failed

Environment:

  • OS: [Ubuntu 20.04]
  • ROS version: [ROS1 noetic]

an error about fmt?

In the function Eigen::Isometry3d PoseEstimation::estimate_pose_lsq(), the line "Sophus::SE3d T_camera_lidar = Sophus::SE3d(init_T_camera_lidar.matrix());" produces the error "error: static assertion failed: Cannot format an argument. To make type T formattable provide a formatter<T> specialization:".

When I was creating my own Docker image, the visualization window got stuck.


Describe the bug
When I created my own Docker image and added the project, the visualization window was very laggy when running and I couldn't rotate the point cloud. What configuration did you use when creating your Docker image?


Environment:

  • OS: Ubuntu 20.04 (Docker)
  • ROS version: ROS1


ROS Noetic catkin_make error

Hi,

This work looks great and I look forward to using it. However, I'm running into the following error when using catkin_make on ROS Noetic:

/home/paul/catkin_ws/src/direct_visual_lidar_calibration/thirdparty/Sophus/sophus/common.hpp:35:10: fatal error: fmt/format.h: No such file or directory
   35 | #include <fmt/format.h>
      |          ^~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/build.make:174: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/src/vlcal/common/estimate_pose.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:9306: direct_visual_lidar_calibration/CMakeFiles/direct_visual_lidar_calibration.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

Thanks

Details On Lidar Image

Hi, I have some questions about how the LiDAR image is generated. When generating the LiDAR intensity image with a LiDAR FoV larger than 150 degrees, why is an extrinsic like this used:
T_lidar_camera.linear()= (Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitX())).toRotationMatrix();
Why not use the same extrinsic as the one used when the LiDAR FoV is smaller than 150 degrees?
T_lidar_camera.linear() = (Eigen::AngleAxisd(M_PI_2, Eigen::Vector3d::UnitY()) * Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitZ())).toRotationMatrix();
Thanks!
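
For inspection, both rotations from the question can be constructed and printed with a few lines of self-contained Eigen code (matrices only, no project code); comparing them shows how each maps the LiDAR axes into the virtual camera frame:

#include <cmath>
#include <iostream>
#include <Eigen/Geometry>

int main() {
  // Rotation quoted for a LiDAR FoV larger than 150 degrees
  const Eigen::Matrix3d R_wide =
      Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitX()).toRotationMatrix();

  // Rotation quoted for a LiDAR FoV smaller than 150 degrees
  const Eigen::Matrix3d R_narrow =
      (Eigen::AngleAxisd(M_PI_2, Eigen::Vector3d::UnitY()) *
       Eigen::AngleAxisd(-M_PI_2, Eigen::Vector3d::UnitZ())).toRotationMatrix();

  std::cout << "R_wide:\n" << R_wide << "\n\nR_narrow:\n" << R_narrow << std::endl;
  return 0;
}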

Could not reproduce calibration example with docker


Describe the bug
When repeating the instructions for running the example using Docker for ROS1, the console shows the following error (screenshot):

To Reproduce
Steps to reproduce the behavior:

  1. download dataset
  2. get docker image
  3. run
    docker run -it --rm --net host --gpus all -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/root/.Xauthority -v /home/robot/calibration/input_bags:/tmp/input_bags -v /home/robot/calibration/preprocessed:/tmp/preprocessed koide3/direct_visual_lidar_calibration:noetic rosrun direct_visual_lidar_calibration preprocess -av /tmp/input_bags /tmp/preprocessed

Expected behavior
The calibration example is reproduced.


Environment:

  • OS: Ubuntu 20.04
  • ROS version: ROS1 noetic


How to set parameters

Thanks for your great research!

I have two questions.

  1. I've heard that even a small change can lead to significant differences in sensor calibration.
     When I adjust the parameter robust_kernel_width in the function "estimate_rotation_ransac" in "estimate_pose.cpp", the result changes significantly, so it seems necessary to determine the correct parameters for each dataset.
     Could you please provide guidance on how to set appropriate parameters such as robust_kernel_width or error_thresh_sq?

  2. I've noticed a step that identifies inliers in the "estimate_rotation_ransac" function, but it appears that this information is never utilized. I'm curious why the code estimates the pose using all correspondences rather than just the inliers in the "estimate_pose_lsq" function.

Thanks!

Nvidia Jetson support

Hi,

Thanks for the tool. When pulling the Docker image for Humble, I get an architecture mismatch error. It would be very beneficial to have arm64 support; alternatively, a guideline for Jetson devices would help.

Lidar-Camera Calibration Issue with Fisheye Lens


Describe the bug

I am using a fisheye lens for LiDAR-camera calibration, but I'm encountering issues with the calibration process. Specifically, I am using rectified fisheye images for the camera calibration process. The distortion model I'm using, "equidistant", throws an error. I referred to the official pages "http://wiki.ros.org/camera_calibration" and "http://docs.ros.org/en/melodic/api/sensor_msgs/html/namespacesensor__msgs_1_1distortion__models.html#a51077dbd502bdea20f46493798ca807b" for information. According to these resources, the distortion model for a fisheye lens in the Noetic version should be "equidistant".

I am able to run preprocessing, but I am facing an issue in initial_guess_manual, which throws the error "error: unknown camera model equidistant".
command: rosrun direct_visual_lidar_calibration initial_guess_manual /media/user/fisheye/fish_test_2/process/

Upon reviewing the code, I noticed that the distortion model for the fisheye lens is specified as "FISHEYE". I attempted to replace this model with "equidistant" in calib.json, but it did not work as expected. Although I am able to obtain the image for choosing corresponding points, the target LiDAR points are missing, and the terminal shows the camera FoV as 0 deg.

(screenshot)

Environment:

  • OS: [Ubuntu 20.04]
  • ROS version: [ROS1 noetic]
