
Look at Robot Base Once: Hand-Eye Calibration

Welcome to our project: Look at Robot Base Once (LRBO): Hand-Eye Calibration.

Our method can be applied to both eye-in-hand and eye-to-hand calibration.

Our proposed method has THREE features:

  1. [EASY] Neither eye-in-hand nor eye-to-hand calibration requires any additional calibration object, such as a chessboard.
  2. Hand-eye calibration can be done with only 3D point clouds.
  3. [FAST] Processing is fast (under 1 second), and the result is as accurate as other 3D vision-based methods.

Overall, it is a fully automatic robot hand-eye calibration method based on 3D vision (point clouds).

Requirement

The overall requirements are as follows (please install them following the official guidelines):

  • PREDATOR: a 3D registration framework for point clouds with low overlap.
  • PV-RCNN/PV-RCNN++: a 3D detection module for robot base detection.
  • The usage of both can be seen here.

However, to get started with our solution quickly, a guideline is provided here; its performance cannot be guaranteed, but it should work in simple cases.

To be honest, the above libraries are not strictly required if you look at the pipeline shown below. The goal is to find the position and orientation of the robot base as seen by the (3D) camera, so that we can estimate the transformation between the robot base and the camera directly and easily. The remaining transformation can be solved by the classical forward kinematic model.
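
To make the transform chain concrete, here is a minimal sketch (not the authors' code) of composing an eye-in-hand transform once the robot base has been registered in the camera frame; the frame-naming convention (T_cam_base, T_base_ee) is our assumption.

```python
import numpy as np

def invert_se3(T):
    """Invert a 4x4 rigid transform using the rotation transpose."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def eye_in_hand(T_cam_base, T_base_ee):
    """Compose the camera-to-end-effector transform.

    T_cam_base: pose of the robot base in the camera frame (from registration).
    T_base_ee:  pose of the end-effector in the base frame (forward kinematics).
    Returns the pose of the camera in the end-effector frame.
    """
    T_cam_ee = T_cam_base @ T_base_ee  # end-effector seen from the camera
    return invert_se3(T_cam_ee)        # invert to get camera in EE frame
```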

Abstract

Hand-eye calibration is a fundamental task in vision-based robotic systems, referring to the calculation of the relative transformation between the camera and the robotic end-effector. It has been studied for many years. However, most methods still rely on external markers or even human assistance. This paper proposes a one-click and fully automatic hand-eye calibration method using 3D point clouds obtained from a 3D camera. Our proposed hand-eye calibration is performed much faster than conventional methods. This is achieved by the learning-based detection and registration of the robot base. In addition, the calibration is performed automatically using only one native calibration object, the robot base, which simplifies the process. Our proposed method is tested for repeatability and accuracy through a series of experiments.

Pipeline

The pipeline of our proposed method is shown below (to be published soon), where eye-in-hand calibration is taken as the example because it is more complex than eye-to-hand calibration. Estimating the transformation matrix between the camera and the robot base is common to both.

The dataset generation for the robot base is elaborated in the paper.

Detection of Robot Base

Here we captured a series of point clouds from a 3D camera. The raw point clouds are shown in green and the Regions of Interest (ROIs) in blue (they are nearly the same size, so the difference can be a bit hard to see :).
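
As an illustration of this step, below is a minimal sketch (our assumption, not the project's code) of cropping an ROI from a raw point cloud given a detected 3D box. Note that PV-RCNN++ actually predicts oriented boxes; the heading angle is omitted here for brevity.

```python
import numpy as np

def crop_roi(points, center, size):
    """Keep points inside an axis-aligned 3D box from the detector.

    points: (N, 3) raw point cloud in the camera frame.
    center, size: (3,) box center and extents predicted by the detector.
    """
    half = np.asarray(size) / 2.0
    mask = np.all(np.abs(points - np.asarray(center)) <= half, axis=1)
    return points[mask]
```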

Registration of Robot Base

These ROIs, extracted from the raw point clouds, are aligned with a model of the robot base (which is itself a point cloud); the registration result is shown below.
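
The paper uses PREDATOR for this step; as a rough stand-in, the sketch below refines a coarse initial alignment with point-to-plane ICP in Open3D (the function names and distance threshold are assumptions for illustration).

```python
import open3d as o3d

def refine_alignment(roi_pcd, base_model_pcd, T_init, max_dist=0.01):
    """Refine a coarse ROI-to-model alignment with point-to-plane ICP.

    roi_pcd:        ROI point cloud in the camera frame.
    base_model_pcd: point cloud model of the robot base.
    T_init:         coarse 4x4 transform, e.g. from a learned registration.
    """
    base_model_pcd.estimate_normals()  # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        roi_pcd, base_model_pcd, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # maps ROI points into the model frame
```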

Hand-eye Calibration

We can perform a hand-eye calibration with only a single point cloud, so we executed hundreds of calibrations (eye-in-hand) during each data acquisition. The result is shown below, where the camera is displayed close to the end-effector.

Video

Video and tutorial coming soon.

Repeatability Experiment

By measuring the variability of the results, repeatability experiments provide insight into the stability and robustness of the calibration method and ensure that the results are not merely due to random fluctuations. Therefore, more than 200 hand-eye calibrations were conducted within 300 seconds (surprise?!). The results are shown below.
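
As a sketch of how such variability might be summarized (not the paper's evaluation code), the following computes the translation and rotation spread over repeated calibration results:

```python
import numpy as np

def repeatability_stats(transforms):
    """Summarize spread over repeated hand-eye results (list of 4x4 matrices)."""
    ts = np.array([T[:3, 3] for T in transforms])
    t_std = ts.std(axis=0)  # per-axis translation standard deviation

    # Rotation spread: angle of each rotation relative to the first run,
    # recovered from the trace of the relative rotation matrix.
    R0 = transforms[0][:3, :3]
    angles = []
    for T in transforms[1:]:
        cos_a = (np.trace(R0.T @ T[:3, :3]) - 1.0) / 2.0
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return t_std, float(np.mean(angles)), float(np.max(angles))
```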

Accuracy Experiment

Two types of testing are performed: static testing and dynamic testing. Static testing is considered the ground truth.

| Method | Position error (mm) | Rotation error (deg) | Runtime (s) | Camera type |
| --- | --- | --- | --- | --- |
| Ours | X: 1.874, Y: 1.092, Z: 0.303 | 0.391 | 1.5 + 5 (move) | ToF camera |
| Ours | X: 1.159, Y: 0.697, Z: 1.025 | 0.994 | <1 + 5 (move) | Structured-light camera |

Implementation Details

PREDATOR, a learning-based point cloud registration framework, is applied in our project. In addition, PV-RCNN++ is employed as a 3D detection module to provide a rough location of the robot base. According to our evaluation results and experiments, their performance is excellent compared with conventional registration methods (more details in the paper).

We trained these frameworks on real-world data. The trained models for PREDATOR and PV-RCNN++ can be downloaded as follows:

Papers

If you find it helpful, please cite:

@misc{li2023look,
    title={Look at Robot Base Once: Hand-Eye Calibration with Point Clouds of Robot Base Leveraging Learning-Based 3D Vision},
    author={Leihui Li and Xingyu Yang and Riwei Wang and Xuping Zhang},
    year={2023},
    eprint={2311.01335},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}

Contribution

This project is maintained by @Leihui Li; please feel free to contact me if you have any questions.


