watonomous / wato_monorepo
Dockerized ROS2 stack for the WATonomous Autonomous Driving Software Pipeline
License: Apache License 2.0
We do this with semantic segmentation and lane detection.
Run inference with BEVFusion. https://github.com/WATonomous/Perception-Research/tree/main/src/bevfusion
Obstacle.msg: https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/blob/develop/src/ros_msgs/common_msgs/msg/Obstacle.msg
Input topics: /lidar + /camera
Output topic: /detections_3d
Another person will also need to help out with creating visualization tools, annotating the image directly and visualizing that.
Instead of our three nodes for camera detection, we want one unified node that outputs a single message with all detections, for resource efficiency and temporal synchronization. Alternatively, we could have a post-processing node that applies a time synchronizer using http://wiki.ros.org/message_filters
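The ApproximateTimeSynchronizer in message_filters pairs messages whose stamps fall within a configurable slop. A minimal plain-Python sketch of that pairing logic (no ROS required; the function name and greedy strategy are ours, not the library's internals):

```python
# Sketch of approximate-time pairing, as message_filters does for us in ROS.
# Stamps are in seconds; `slop` is the maximum allowed stamp difference.

def pair_detections(stamps_a, stamps_b, slop=0.05):
    """Greedily pair each stamp in stamps_a with the closest unused stamp
    in stamps_b that is within `slop` seconds. Returns (a, b) pairs."""
    pairs = []
    used = set()
    for a in stamps_a:
        best, best_dt = None, slop
        for i, b in enumerate(stamps_b):
            dt = abs(a - b)
            if i not in used and dt <= best_dt:
                best, best_dt = i, dt
        if best is not None:
            used.add(best)
            pairs.append((a, stamps_b[best]))
    return pairs

# Two camera streams whose stamps are within 50 ms of each other:
print(pair_detections([0.0, 0.1], [0.01, 0.12]))  # [(0.0, 0.01), (0.1, 0.12)]
```

The real node would hand matched message tuples to a single callback instead of returning stamp pairs.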
June 13 2024
Port over the camera object detection node from https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/tree/develop/src/camera_detection into our camera_object_detection node.
Dockerfile: https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/blob/develop/docker/camera_detection/Dockerfile
They make the distinction between the left and right cameras; you can ignore that for now. We will use the same Obstacle format for now.
Input topic: /camera
Output topic: /detections_2d
Obstacle.msg is defined here. Another person will also need to help out with creating visualization tools, annotating the image directly and visualizing that.
Just for consistency with traffic_signs
June 13 2024
To be consistent across the monorepo, and good practice.
For example:
https://github.com/WATonomous/wato_monorepo/blob/main/src/perception/camera_object_detection/config/eve_config.yaml
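For illustration only, a hypothetical config in that per-vehicle style; the parameter names below are invented for this sketch, not taken from eve_config.yaml:

```yaml
# Hypothetical example of a per-vehicle ROS2 params file; the real
# parameters live in eve_config.yaml linked above.
camera_object_detection_node:
  ros__parameters:
    camera_topic: /camera
    detections_topic: /detections_2d
    image_size: 640        # model input resolution, not camera resolution
    confidence_threshold: 0.5
```

Keeping all tunables in one YAML file per vehicle lets launch files stay identical across platforms.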
Port over the lidar object detection node from https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/tree/perp/add-pp/src/lidar_cuda into our lidar_object_detection node. The original repo is here: https://github.com/open-mmlab/mmdetection3d
Obstacle.msg: https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/blob/develop/src/ros_msgs/common_msgs/msg/Obstacle.msg
Input topic: /lidar
Output topic: /detections_3d
Another person will also need to help out with creating visualization tools, annotating the image directly and visualizing that.
LiDAR velocity estimation, which provides redundancy for radar velocity detection
June 13 2024
Because there would be too many messages otherwise; I think ZED also does it this way.
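If LiDAR velocity estimation is added, one simple baseline is finite-differencing consecutive tracked centroids. A sketch under that assumption (not the monorepo's implementation; the function name is ours):

```python
# Baseline velocity estimate from two consecutive tracked detections.
# A real tracker would smooth this (e.g. with a Kalman filter).

def estimate_velocity(p_prev, p_curr, t_prev, t_curr):
    """Finite-difference velocity in m/s between two tracked centroids.
    Positions are (x, y, z) tuples in metres; times are in seconds."""
    dt = t_curr - t_prev
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

# An object that moved 1 m forward and 2 m left over 0.5 s:
print(estimate_velocity((0.0, 0.0, 0.0), (1.0, 2.0, 0.0), 0.0, 0.5))
# (2.0, 4.0, 0.0)
```

This per-track estimate is what would be fused against the radar's Doppler velocity for redundancy.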
Bags can be found in /mnt/wato-drive/rosbags
You can also try this repo to start up a server and see rosbags: https://git.uwaterloo.ca/WATonomous/rosbag-server
Steps:
1. pip install rosbags
2. Run the conversion command. Make sure everything is mounted properly in the docker-compose.yaml so the rosbags generated in the container can be accessed outside the container (/mnt/wato-drive2/ros2bags).
bag_bridge profile for monorepo_v2, essentially a port from https://git.uwaterloo.ca/WATonomous/wato_monorepo/-/blob/develop/profiles/docker-compose.bag_bridge.yaml, but we are using ROS2 now and the mount paths are different (should be like /mnt/wato-drive2/ros2bags:/bags). In the end, the following command should work:
watod2 bag_bridge ros2 bag play -r 1.0 /path/to/bag
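A hedged sketch of what the ported profile might contain; the service name follows the issue, but the image name is a placeholder:

```yaml
# Hypothetical docker-compose profile sketch for the ROS2 bag_bridge.
# Only the volume mount path comes from the issue; the rest is illustrative.
services:
  bag_bridge:
    image: bag_bridge:latest   # placeholder image name
    volumes:
      - /mnt/wato-drive2/ros2bags:/bags
```

With the mount in place, any bag under /mnt/wato-drive2/ros2bags is visible inside the container at /bags for `ros2 bag play`.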
On line 34 of https://github.com/WATonomous/wato_monorepo/blob/8a82913a92717fb9b77ccc211830eb999179edc2/docker/interfacing/sensor_interfacing/sensor_interfacing.Dockerfile, I need to add an apt update in front, else I get the following error.
The reference on line 24 doesn't include an apt update.
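The standard fix is to run apt-get update in the same RUN layer as the install, so the package index is never stale. A sketch with a placeholder package name:

```dockerfile
# Update and install in one layer so the index is fresh for this install;
# the package name below is a placeholder, not the actual dependency.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        some-sensor-driver-package && \
    rm -rf /var/lib/apt/lists/*
```

Cleaning /var/lib/apt/lists afterwards keeps the image small and is why a later install without its own update fails.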
The image_size parameter name is misleading because it doesn't actually represent the resolution of the camera, but rather the resolution of the image that is fed to the model.
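To illustrate the distinction: image_size sets the model input, to which the native camera frame is resized (and typically letterboxed). A sketch of that computation with hypothetical values; the function name is ours, not the node's:

```python
# Shows why image_size != camera resolution: the frame is scaled and
# padded to fit a square image_size x image_size model input.

def letterbox_scale(cam_w: int, cam_h: int, image_size: int):
    """Return (scale, pad_x, pad_y) used to fit a cam_w x cam_h frame
    into a square image_size x image_size model input."""
    scale = image_size / max(cam_w, cam_h)      # shrink longest side to fit
    new_w, new_h = round(cam_w * scale), round(cam_h * scale)
    pad_x = (image_size - new_w) // 2           # horizontal letterbox padding
    pad_y = (image_size - new_h) // 2           # vertical letterbox padding
    return scale, pad_x, pad_y

# A 1920x1080 camera frame fed to a model expecting 640x640 input:
print(letterbox_scale(1920, 1080, 640))
```

A name like model_input_size would make the parameter's role obvious.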
We need 3D object tracks to send to world modelling
Potential Implementation:
https://github.com/ZHOUYI1023/awesome-radar-perception?tab=readme-ov-file#Velocity-Estimation
June 13 2024
Old ros2 node for reference: WATonomous/deepracer_ws#2
June 13 2024
- Moving Forward:
We don't care about object clustering here, we care about velocity detections.
Near-scan and far-scan filtering from the radar point cloud
June 13 2024
Using Polymath driver
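The near/far split above can be sketched as a range threshold over the radar point cloud; the threshold value, field names, and function name below are assumptions for illustration, not values from the Polymath driver:

```python
import math
from typing import NamedTuple

class RadarPoint(NamedTuple):
    x: float
    y: float
    z: float

# Hypothetical cutoff between the radar's near and far scans, in metres.
NEAR_SCAN_MAX_RANGE = 70.0

def split_near_far(points):
    """Split radar points into (near, far) lists by Euclidean range."""
    near, far = [], []
    for p in points:
        r = math.sqrt(p.x**2 + p.y**2 + p.z**2)
        (near if r <= NEAR_SCAN_MAX_RANGE else far).append(p)
    return near, far

near, far = split_near_far([RadarPoint(10, 0, 0), RadarPoint(100, 0, 0)])
print(len(near), len(far))  # 1 1
```

In practice the driver may already tag points by scan type, in which case filtering by that tag is preferable to re-deriving it from range.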