CalibAnything


This package provides an automatic, target-less LiDAR-camera extrinsic calibration method based on the Segment Anything Model (SAM). The related paper is Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything. For more calibration code, please refer to SensorsCalibration.

Prerequisites

  • PCL 1.10
  • OpenCV
  • Eigen 3

Compile

git clone https://github.com/OpenCalib/CalibAnything.git
cd CalibAnything
mkdir -p build && cd build
cmake .. && make

Run Example

We provide examples for two datasets (KITTI and nuScenes). You can download the processed data from Google Drive or BaiduNetDisk:

# baidunetdisk
Link: https://pan.baidu.com/s/1qAt7nYw5hYoJ1qrH0JosaQ?pwd=417d 
Code: 417d

Run the command:

cd CalibAnything
./bin/run_lidar2camera ./data/kitti/calib.json # kitti dataset
./bin/run_lidar2camera ./data/nuscenes/calib.json # nuscenes dataset

Test your own data

Data collection

  • Several pairs of time-synchronized RGB images and LiDAR point clouds (intensity is required). A single pair can also be used for calibration, but the result may be unstable.
  • The camera intrinsics and an initial guess of the extrinsic.

Preprocessing

Generate masks

Follow the instructions in Segment Anything to generate masks for your images.

  1. First download a model checkpoint. You can choose vit_l.

  2. Install SAM

# environment: python>=3.8, pytorch>=1.7, torchvision>=0.8

git clone [email protected]:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .
pip install opencv-python pycocotools matplotlib onnxruntime onnx
  3. Run
python scripts/amg.py --checkpoint <path/to/checkpoint> --model-type <model_type> --input <image_or_folder> --output <path/to/output>

# example (recommended parameters)
python scripts/amg.py --checkpoint sam_vit_l_0b3195.pth --model-type vit_l --input ./data/kitti/000000/images/  --output ./data/kitti/000000/masks/ --stability-score-thresh 0.9 --box-nms-thresh 0.5 --stability-score-offset 0.9

Data folder

The hierarchy of your folders should be as follows:

YOUR_DATA_FOLDER
├─calib.json
├─pc
|   ├─000000.pcd
|   ├─000001.pcd
|   ├─...
├─images
|   ├─000000.png
|   ├─000001.png
|   ├─...
├─masks
|   ├─000000
|   |   ├─000.png
|   |   ├─001.png
|   |   ├─...
|   ├─000001
|   ├─...

Processed masks

For large masks, we only use the part near the mask edge.

python processed_mask.py -i <YOUR_DATA_FOLDER>/masks/ -o <YOUR_DATA_FOLDER>/processed_masks/
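A minimal sketch of the edge-band idea in pure NumPy (the actual processed_mask.py may implement it differently, e.g. with OpenCV morphology; the function names here are hypothetical):

```python
import numpy as np

def erode_once(m: np.ndarray) -> np.ndarray:
    """One step of 4-neighbourhood binary erosion (pixels outside the image
    count as background, so border pixels erode away)."""
    out = m.copy()
    out[1:, :] &= m[:-1, :]   # up-neighbour must be set
    out[:-1, :] &= m[1:, :]   # down-neighbour must be set
    out[:, 1:] &= m[:, :-1]   # left-neighbour must be set
    out[:, :-1] &= m[:, 1:]   # right-neighbour must be set
    out[0, :] = False
    out[-1, :] = False
    out[:, 0] = False
    out[:, -1] = False
    return out

def edge_band(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Keep only the mask pixels within `width` pixels of the mask boundary."""
    m = mask.astype(bool)
    eroded = m
    for _ in range(width):
        eroded = erode_once(eroded)
    return m & ~eroded
```

For a solid 5x5 mask, `edge_band(mask, 1)` keeps the outer ring of 16 pixels and drops the 3x3 interior.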

Edit the json file

Content description
  • cam_K: camera intrinsic matrix
  • cam_dist: camera distortion coefficients [k1, k2, p1, p2, k3, ...], in the same order as OpenCV
  • T_lidar_to_cam: initial guess of the extrinsic
  • T_lidar_to_cam_gt: ground truth of the extrinsic (used to calculate the error; if not provided, set "available" to false)
  • img_folder: the path to the images
  • mask_folder: the path to the masks
  • pc_folder: the path to the point clouds
  • img_format: the suffix of the images
  • pc_format: the suffix of the point clouds (pcd and KITTI bin are supported)
  • file_name: the names of the input images and point clouds
  • min_plane_point_num: the minimum number of points in plane extraction
  • cluster_tolerance: the spatial cluster tolerance in Euclidean clustering (set it larger if the point cloud is sparse, e.g. from a 32-beam LiDAR)
  • search_num: the number of search iterations
  • search_range: the search range for rotation and translation
  • point_range: the approximate height range of the point cloud projected onto the image (the top of the image is 0.0 and the bottom is 1.0)
  • down_sample: the point cloud downsampling voxel size (if downsampling is not needed, set "is_valid" to false)
  • thread: the number of threads used to reduce calibration time
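A hypothetical sketch of what such a file might look like. The field names follow the list above, but the value layouts (flattening order, the nesting of "available" and "is_valid", and all numbers) are illustrative assumptions; compare with the provided ./data/kitti/calib.json before editing:

```json
{
  "cam_K": [718.856, 0.0, 607.19, 0.0, 718.856, 185.22, 0.0, 0.0, 1.0],
  "cam_dist": [0.0, 0.0, 0.0, 0.0, 0.0],
  "T_lidar_to_cam": [0.0, -1.0, 0.0, 0.0,
                     0.0, 0.0, -1.0, 0.0,
                     1.0, 0.0, 0.0, 0.0,
                     0.0, 0.0, 0.0, 1.0],
  "T_lidar_to_cam_gt": { "available": false },
  "img_folder": "./data/kitti/000000/images/",
  "mask_folder": "./data/kitti/000000/processed_masks/",
  "pc_folder": "./data/kitti/000000/pc/",
  "img_format": ".png",
  "pc_format": ".pcd",
  "file_name": ["000000", "000001"],
  "min_plane_point_num": 1000,
  "cluster_tolerance": 0.5,
  "search_num": 3,
  "search_range": { "rotation": 5.0, "translation": 0.3 },
  "point_range": [0.3, 1.0],
  "down_sample": { "is_valid": true, "voxel_size": 0.1 },
  "thread": 8
}
```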

Calibration

./bin/run_lidar2camera <path-to-json-file>

Output

  • initial projection: init_proj.png, init_proj_seg.png
  • gt projection: gt_proj.png, gt_proj_seg.png
  • refined projection: refined_proj.png, refined_proj_seg.png
  • refined extrinsic: extrinsic.txt

Citation

If you find this project useful in your research, please consider citing:

@misc{luo2023calibanything,
      title={Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything}, 
      author={Zhaotong Luo and Guohang Yan and Yikang Li},
      year={2023},
      eprint={2306.02656},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

calibanything's People

Contributors

sisyphus-99, xiaokn


calibanything's Issues

Segmentation Fault(core dumped)

When I try the test data, it gives a "Segmentation fault (core dumped)" error. After locating the problematic line by adding cout statements, the line is:
Segment_pc(pc_filtered, normals, seg_indices); // calibration.cpp, line 114
Somehow it does not segment pc_filtered. The interesting part of this error is that I saved pc_filtered as a .pcd file and processed it in different scripts in the same way without any problem, but when it is processed in calibration.cpp, it gives the error. What could be the problem here?

Using curvature instead of normal in the code

Hi, I appreciate all this effort in developing this awesome calibration tool. I have a question about the normal you adopt in the source code.

After diving into the code, I notice that you commented out the original normal properties and calculations and replaced them with curvature.

Is there any reason for this shift such as performance or accuracy?

Thank you in advance.

code is not successfully running for the shared data

Hello,
Thanks for sharing the calibration code. After a proper build, when I test with the KITTI data, I get the failure message below. Could you please check?

Reading json file complete!
----------Start processing data----------
Processing data 1:
Point cloud num: 31499
run_lidar2camera: /usr/include/pcl-1.8/pcl/octree/octree_iterator.h:287: pcl::octree::OctreeIteratorBase::LeafContainer& pcl::octree::OctreeIteratorBase::getLeafContainer() [with OctreeT = pcl::octree::OctreeBase<pcl::octree::OctreeContainerPointIndices, pcl::octree::OctreeContainerEmpty>; pcl::octree::OctreeIteratorBase::LeafContainer = pcl::octree::OctreeContainerPointIndices]: Assertion `this->isLeafNode()' failed.
Aborted

Failed to open dir

I have compiled successfully, but running ../run_lidar2camera ./data/ours/1 produces an error.
My OpenCV version is 3.4.

Suggestion - Integrate MobileSAM into the pipeline for lightweight and faster inference

Reference: https://github.com/ChaoningZhang/MobileSAM

Our project performs on par with the original SAM and keeps exactly the same pipeline as the original SAM except for a change in the image encoder; therefore, it is easy to integrate into any project.

MobileSAM is around 60 times smaller and around 50 times faster than the original SAM, and it is around 7 times smaller and around 5 times faster than the concurrent FastSAM. The comparison of the whole pipeline is summarized as follows:

(pipeline comparison images omitted)

Best Wishes,

Qiao

run_lidar2camera core dumped!!

I ran the command below and got a core-dumped error.

run_lidar2camera /home/jiangziben/0jzb/CodeProject/CalibAnything/data/kitti/calib.json

The error information is shown below:

run_lidar2camera: /usr/local/include/eigen3/Eigen/src/Core/DenseStorage.h:128: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 32>::plain_array() [with T = float; int Size = 16; int MatrixOrArrayOptions = 0]: Assertion `(internal::UIntPtr(eigen_unaligned_array_assert_workaround_gcc47(array)) & (31)) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed.

What's wrong with my code?
Thanks!

T in calib.txt

Hello,

Thanks for your excellent work. I noticed that there are 12 elements in the T matrix in the calib.txt file. Are these elements the 3 x 4 transform matrix concatenated row-wise or column-wise? Thanks.
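Assuming the file follows the usual KITTI calib.txt convention, the 12 values are the 3 x 4 matrix flattened row by row (row-major), so they can be reshaped directly. A sketch with hypothetical numbers (check against the repository's own calib files):

```python
import numpy as np

# 12 values as they appear left-to-right in calib.txt (hypothetical numbers)
vals = [1.0, 0.0, 0.0, 0.1,
        0.0, 1.0, 0.0, 0.2,
        0.0, 0.0, 1.0, 0.3]

# Row-major (KITTI convention): each group of four values is one row [R | t]
T = np.array(vals).reshape(3, 4)

# Homogeneous 4x4 form, convenient for composing with other transforms
T_h = np.vstack([T, [0.0, 0.0, 0.0, 1.0]])
```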

How to solve the following problem?

In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from /usr/local/include/opencv2/core/eigen.hpp:64,
from /home/mi/test/CalibAnything/include/utility.hpp:19,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:254:30: error: redeclared with 1 template parameter
254 | template struct array_size;
| ^~~~~~~~~~
In file included from /usr/local/include/eigen3/Eigen/Core:162,
from /home/mi/test/CalibAnything/include/utility.hpp:18,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/local/include/eigen3/Eigen/src/Core/util/Meta.h:445:55: note: previous declaration ‘template<class T, class EnableIf> struct Eigen::internal::array_size’ used 2 template parameters
445 | template<typename T, typename EnableIf = void> struct array_size {
| ^~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from /usr/local/include/opencv2/core/eigen.hpp:64,
from /home/mi/test/CalibAnything/include/utility.hpp:19,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:255:41: error: redefinition of ‘struct Eigen::internal::array_size<const std::array<_Tp, _Nm> >’
255 | template<class T, std::size_t N> struct array_size<const std::array<T,N> > {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/eigen3/Eigen/Core:162,
from /home/mi/test/CalibAnything/include/utility.hpp:18,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/local/include/eigen3/Eigen/src/Core/util/Meta.h:461:44: note: previous definition of ‘struct Eigen::internal::array_size<const std::array<_Tp, _Nm> >’
461 | template<typename T, std::size_t N> struct array_size<const std::array<T,N> > {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from /usr/local/include/opencv2/core/eigen.hpp:64,
from /home/mi/test/CalibAnything/include/utility.hpp:19,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:258:30: error: redeclared with 1 template parameter
258 | template struct array_size;
| ^~~~~~~~~~
In file included from /usr/local/include/eigen3/Eigen/Core:162,
from /home/mi/test/CalibAnything/include/utility.hpp:18,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/local/include/eigen3/Eigen/src/Core/util/Meta.h:445:55: note: previous declaration ‘template<class T, class EnableIf> struct Eigen::internal::array_size’ used 2 template parameters
445 | template<typename T, typename EnableIf = void> struct array_size {
| ^~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from /usr/local/include/opencv2/core/eigen.hpp:64,
from /home/mi/test/CalibAnything/include/utility.hpp:19,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:259:41: error: redefinition of ‘struct Eigen::internal::array_size<std::array<_Tp, _Nm> >’
259 | template<class T, std::size_t N> struct array_size<std::array<T,N> > {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/eigen3/Eigen/Core:162,
from /home/mi/test/CalibAnything/include/utility.hpp:18,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/local/include/eigen3/Eigen/src/Core/util/Meta.h:464:44: note: previous definition of ‘struct Eigen::internal::array_size<std::array<_Tp, _Nm> >’
464 | template<typename T, std::size_t N> struct array_size<std::array<T,N> > {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:115,
from /usr/local/include/opencv2/core/eigen.hpp:64,
from /home/mi/test/CalibAnything/include/utility.hpp:19,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h: In member function ‘void Eigen::TensorContractionEvaluatorBase::evalGemm(Eigen::TensorContractionEvaluatorBase::Scalar*) const’:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h:466:111: error: wrong number of template arguments (6, should be at least 7)
466 | internal::gemm_pack_lhs<LhsScalar, Index, typename LhsMapper::SubMapper, mr, Traits::LhsProgress, ColMajor> pack_lhs;
| ^
In file included from /usr/local/include/eigen3/Eigen/Core:286,
from /home/mi/test/CalibAnything/include/utility.hpp:18,
from /home/mi/test/CalibAnything/include/calibration.hpp:10,
from /home/mi/test/CalibAnything/src/calibration.cpp:1:
/usr/local/include/eigen3/Eigen/src/Core/util/BlasUtil.h:28:8: note: provided for ‘template<class Scalar, class Index, class DataMapper, int Pack1, int Pack2, class Packet, int StorageOrder, bool Conjugate, bool PanelMode> struct Eigen::internal::gemm_pack_lhs’
28 | struct gemm_pack_lhs;
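The log above pulls Eigen headers from both /usr/include/eigen3 and /usr/local/include/eigen3, i.e. two different Eigen versions mixed in one translation unit, which is what produces the redefinition errors. A likely fix (assuming the project uses the common FindEigen3 CMake module; the path below is an example) is to force the build to see a single installation:

```shell
# Rebuild against one Eigen only; EIGEN3_INCLUDE_DIR is honored by the
# widely used FindEigen3.cmake module (path is an example)
cd build
cmake .. -DEIGEN3_INCLUDE_DIR=/usr/local/include/eigen3
make
```

Alternatively, removing one of the two Eigen installations avoids the mix entirely.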

Point cloud pre-processing

I was looking into both the paper and the implementation and have a few comments/clarification requests.

-> Why are you using normal estimation, intensity normalization, and segmentation together? I am trying to understand their dependency on each other.

-> Why are we using conventional/unsupervised segmentation for point clouds? Can't we use any of the recent segmentation models, similar to the way SAM is used for images?

Thanks, Veeru.
