
ROS Sensor Fusion based Multi-Object Trajectory Prediction

This repository implements a perception system for autonomous driving. In particular, it focuses on object detection, tracking, sensor fusion, and trajectory prediction. We use YOLOv5 and PointPillars for object detection from the camera and LiDAR sensors, respectively. The overall pipeline is as follows.

[Videos: System Prediction Result]

The videos above show the prediction output visualized in ROS RViz.

Prerequisite

  1. Tested on Ubuntu 20.04 (ROS Noetic) with an NVIDIA GeForce RTX 3070
  2. Other required libraries are listed in requirements.txt

Preparation

0. Clone this repository and move into it

Clone this repository into the src directory of your catkin workspace, then change your current directory to it.

cd path_to_your_ws/src
git clone https://github.com/s-duuu/pred_fusion.git
cd pred_fusion

1. Install requirements

Install the modules listed in requirements.txt.

pip install -r requirements.txt

2. Clone PointPillars

Clone the official repository of PointPillars.

git clone https://github.com/zhulf0804/PointPillars.git

3. Clone OpenPCDet

Clone the official repository of OpenPCDet.

git clone https://github.com/open-mmlab/OpenPCDet.git

4. Clone CRAT-Pred

Clone the official repository of CRAT-Pred.

git clone https://github.com/schmidt-ju/crat-pred.git

5. Build package

Build the package in your workspace.

cd path_to_your_ws
catkin_make  # or: catkin build
source ./devel/setup.bash

Execute Launch file & Test

Execute the launch file, which starts all ROS nodes required by the system.

roslaunch fusion_prediction integrated.launch

You can test the system with a ROS bag file. Download the file and play it in another terminal; RViz will display the system output.

cd path_to_bagfile
rosbag play test.bag

Detection Models

1. YOLOv5

We trained a YOLOv5s model, which is located at pred_fusion/fusion_prediction/yolo.pt. Since the model was trained on image data extracted from the CarMaker simulator, you should replace it if you need a YOLOv5 model for real vehicles. You can train a new model following the official yolov5 GitHub repository.
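
If you swap in your own weights, they can be loaded through the standard torch.hub interface of the official yolov5 repository. A minimal sketch (the input image path is a hypothetical example):

import torch

# Load the bundled custom YOLOv5s weights via the official hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='pred_fusion/fusion_prediction/yolo.pt')
model.conf = 0.25  # detection confidence threshold

results = model('test_image.png')  # hypothetical input image
boxes = results.xyxy[0]            # tensor rows: [x1, y1, x2, y2, conf, class]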

2. PointPillars

We also trained a PointPillars model, which is located at pred_fusion/fusion_prediction/pillars.pth. This model was trained on the KITTI dataset, so you do not need to change it.
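
For reference, the checkpoint can be restored with the PointPillars class from the cloned zhulf0804/PointPillars repository. A minimal sketch, assuming its default three KITTI classes; the import path may differ between versions of that repo:

import torch
from model import PointPillars  # adjust to the cloned repo's layout

# Restore the bundled KITTI-trained checkpoint (Pedestrian, Cyclist, Car).
model = PointPillars(nclasses=3)
model.load_state_dict(torch.load('pred_fusion/fusion_prediction/pillars.pth'))
model.cuda().eval()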

Sensor Fusion

The sensor fusion algorithm follows a late-fusion approach based on bounding-box projection. Each 3D bounding box predicted by the PointPillars model is projected onto the image plane, and the algorithm then decides whether two bounding boxes belong to the same object based on their IoU.
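
The projection and overlap test can be sketched as below; the box-corner format, calibration matrices, and threshold are illustrative assumptions, not the exact variables used in fusion.py:

import numpy as np

def project_box_to_image(corners_3d, K, T_cam_lidar):
    # corners_3d: (8, 3) box corners in the LiDAR frame (assumed format).
    # K: (3, 3) camera intrinsics; T_cam_lidar: (4, 4) LiDAR-to-camera extrinsics.
    pts = np.hstack([corners_3d, np.ones((8, 1))])  # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts.T)[:3]             # LiDAR frame -> camera frame
    uv = K @ pts_cam                                # camera frame -> pixel plane
    uv = uv[:2] / uv[2]                             # perspective division
    x1, y1 = uv.min(axis=1)                         # axis-aligned enclosing 2D box
    x2, y2 = uv.max(axis=1)
    return np.array([x1, y1, x2, y2])

def iou_2d(a, b):
    # a, b: [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

# Two boxes are treated as the same object when their IoU exceeds a
# threshold (the value here is an assumption).
SAME_OBJECT_IOU = 0.5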

Object Tracking

The object tracking algorithm is based on SORT (Simple Online and Realtime Tracking). The algorithm tracks each BEV (Bird's Eye View) bounding box: state estimation uses a Kalman filter, matching is based on IoU, and assignment uses the Hungarian algorithm.
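
The association step can be sketched with SciPy's Hungarian solver, reusing the iou_2d helper from the fusion sketch above; the BEV box format and threshold are assumptions:

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, iou_threshold=0.3):
    # tracks, detections: lists of BEV boxes [x1, y1, x2, y2].
    cost = np.zeros((len(tracks), len(detections)))
    for t, trk in enumerate(tracks):
        for d, det in enumerate(detections):
            cost[t, d] = -iou_2d(trk, det)  # maximizing IoU = minimizing -IoU
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments whose overlap clears the threshold.
    return [(t, d) for t, d in zip(rows, cols)
            if -cost[t, d] >= iou_threshold]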

Trajectory Prediction

The trajectory is predicted by the CRAT-Pred model, which is located at pred_fusion/fusion_prediction/crat.ckpt. This model was trained on the Argoverse dataset, so you do not need to change it.
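
For reference, the checkpoint can be restored through PyTorch Lightning's standard interface; the import below assumes the CratPred module defined in the cloned crat-pred repository and may need adjusting:

from model.crat_pred import CratPred  # adjust to the cloned repo's layout

# Restore the bundled Argoverse-trained checkpoint.
model = CratPred.load_from_checkpoint('pred_fusion/fusion_prediction/crat.ckpt')
model.eval()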

Contributor

Kim SeongJu, School of Mechanical Engineering, Sungkyunkwan University, South Korea

e-mail: [email protected]


pred_fusion's Issues

Some personal questions about the model

First of all, thank you very much for replying to me amidst your busy schedule. Secondly, I have the following questions.
1. I have reproduced the fusion recognition code using the KITTI dataset and changed the topic. The display may not perform as well as the test bag you provided because the intrinsic and extrinsic parameters were not modified. Therefore, besides the parameters in fusion.py, are there any other files that need to be modified for the intrinsic and extrinsic parameters?
2. If I connect my own camera and LiDAR, do I also need to modify the calibration file?
3. What are the execution commands for the tracking and fusion-prediction parts of the code? After executing the launch file, I can only see the fusion recognition output, without the tracking process. Can you explain this?
Thank you very much for the help your repository has provided me, and I am eagerly looking forward to your reply.
Best wishes.

how to modify the paths

Hello, I have trained new YOLOv5 weights (v6.0) and new PointPillars weights. How do I modify the paths to these two weight files? Could you provide a detailed explanation? Thank you very much.
Looking forward to your reply

about changing the data

If I want to change the KITTI data, what should I do? And if I want to use my own camera and LiDAR, what should I do?
If you have free time, please reply to me. Thank you.

I can't find where to train PointPillars

Hello, good afternoon. I would like to ask: if I want to retrain PointPillars, where should I train it? It seems possible in src/src, but what is the purpose of the pred_fusion folder? I can't find where to train PointPillars. Could you help me clarify the path? Thank you very much.

1

> (quoting the first issue above, "Some personal questions about the model")

Hello, have you had any success with the KITTI dataset? I am currently trying to use KITTI as well, but I am having some issues. Could you provide your intrinsic and extrinsic parameters for the KITTI dataset?
Thanks.

Originally posted by @SHIELDgo in #2 (comment)

camera and LiDAR synchronization

Hello, I would like to ask whether this project synchronizes the camera and the LiDAR. If so, in which code segment is it done? If not, how does the final fusion determine that the data belong to the same frame? Thanks.

rviz does not display or rarely displays 3D detection boxes

Hello, when I apply my own LiDAR data, RViz does not display, or only rarely displays, 3D detection boxes. Is this related to my data? Where should I modify the code? (I am using a Raytheon 16-line LiDAR.) The 3D detection boxes for the KITTI dataset display correctly.
Looking forward to your reply, thank you
Best wishes
