
MMS-SLAM

Multi-modal semantic SLAM in dynamic environments (Intel Realsense L515 as an example)

Docker

If the GUI fails to come up, run xhost + first; by default, other users' graphical programs are not allowed to display on the current screen.

docker-compose build # wait for a few minutes

After the build succeeds, run

docker-compose up # wait for a few minutes

to enter the Docker environment. Inside the container, build with catkin_make, run source devel/setup.bash, and update the dataset path; the code is then ready to run.
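
For reference, the full host-plus-container sequence looks roughly like this; the workspace path inside the container is an assumption, so adjust it to wherever your compose file mounts the code:

    # on the host: allow container GUIs to reach the X server, then build and start
    xhost +
    docker-compose build
    docker-compose up
    # inside the container (workspace path is an assumption from the compose setup)
    cd ~/catkin_ws
    catkin_make
    source devel/setup.bash
    # edit the dataset path in the launch file before running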

This code is modified from SSL_SLAM

Modifier: Wang Han, Nanyang Technological University, Singapore

[Update] AGV dataset is available online! (optional)

1. Solid-State Lidar Sensor Example

1.1 Scene Reconstruction in Dynamic Environments

1.2 Mapping Result

1.3 Human & AGV Recognition Result

1.4 Performance Evaluation

1.5 Detection Result

2. Prerequisites

2.1 Ubuntu and ROS

Ubuntu 64-bit 18.04.

ROS Melodic. ROS Installation

2.2 Ceres Solver

Follow Ceres Installation.

2.3 PCL

Follow PCL Installation.

Tested with PCL 1.8.1.

2.4 OctoMap

Follow OctoMap Installation.

sudo apt install ros-melodic-octomap*

2.5 Trajectory visualization

For visualization purposes, this package uses hector_trajectory_server; you may install it with:

sudo apt-get install ros-melodic-hector-trajectory-server

Alternatively, you may remove the hector_trajectory_server node from the launch file if trajectory visualization is not needed.

3. Build

3.1 Clone repository:

    cd ~/catkin_ws/src
    git clone https://github.com/wh200720041/mms_slam.git
    cd ..
    catkin_make
    source ~/catkin_ws/devel/setup.bash

Make the Python node executable:

roscd mms_slam
cd src
chmod +x solo_node.py

3.2 Install mmdetection

Create a conda environment (you need to install conda first):

conda create -n solo python=3.7 -y
conda activate solo

Install PyTorch and torchvision following the official instructions (pick the build matching your CUDA version):

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
conda install -c conda-forge addict rospkg pycocotools
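
Before moving on, it is worth checking that the CUDA build of PyTorch was actually installed; expect True on a machine with a working NVIDIA driver:

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"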

Install mmdet 2.0:

roscd mms_slam 
cd dependencies/mmdet
python setup.py install

This takes a while (a few minutes) to install.
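
A quick import check confirms the build succeeded (run inside the solo environment):

    python -c "import mmdet; print(mmdet.__version__)"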

3.3 Download test rosbag and model

You may download our trained model and recorded data if you don't have a Realsense L515; by default the files should be under /home/username/Downloads.

Put the model under mms_slam/config/:

cp ~/Downloads/trained_model.pth ~/catkin_ws/src/mms_slam/config/

Unzip the rosbag file under the Downloads folder:

cd ~/Downloads
unzip ~/Downloads/dynamic_warehouse.zip
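
You can optionally inspect the bag before playing it; rosbag info lists the recorded topics, message counts, and duration:

    rosbag info ~/Downloads/dynamic_warehouse.bag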

3.4 Launch ROS

If you would like to create the map at the same time, run:

    roslaunch mms_slam mms_slam_mapping.launch

If only localization is required, run:

    roslaunch mms_slam mms_slam.launch

If you would like to test the instance segmentation results only, run:

    roslaunch mms_slam mms_slam_detection.launch

If you see ModuleNotFoundError: No module named 'alfred', install alfred-py via pip:

pip install alfred-py
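
Once a launch file is up, you can confirm the nodes started and see what they publish; the exact node and topic names depend on the launch file, so treat these as generic checks:

    rosnode list    # the SLAM and segmentation nodes should appear here
    rostopic list   # shows everything currently being published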

4. Sensor Setup

If you have a new Realsense L515 sensor, you may follow the setup instructions below.

4.1 L515

4.2 Librealsense

Follow Librealsense Installation.

4.3 Realsense_ros

Clone the realsense_ros package into your catkin workspace:

    cd ~/catkin_ws/src
    git clone https://github.com/IntelRealSense/realsense-ros.git
    cd ..
    catkin_make

4.4 Launch ROS with live L515 camera data

In your launch file, uncomment the realsense node like this:

    <include file="$(find realsense2_camera)/launch/rs_camera.launch">
        <arg name="color_width" value="1280" />
        <arg name="color_height" value="720" />
        <arg name="filters" value="pointcloud" />
    </include>

and comment out the rosbag play node like this:

<!-- rosbag
    <node name="bag" pkg="rosbag" type="play" args="- -clock -r 0.4 -d 5 $(env HOME)/Downloads/dynamic_warehouse.bag" />
    <param name="/use_sim_time" value="true" />  
-->
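
With the live camera running, you can verify that the L515 is streaming; these are the default realsense2_camera topic names and may differ if your launch file remaps them:

    rostopic hz /camera/color/image_raw
    rostopic hz /camera/depth/color/points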

5. Training on AGV & Human dataset

5.1 Dataset preparation

The human data are collected from the COCO dataset: train2017.zip (18 GB) and val2017.zip (1 GB). The AGV data are manually collected and labelled: Download (1 GB).

cd ~/Downloads
unzip train2017.zip
unzip val2017.zip
unzip agv_data.zip
# gather everything under one train_data folder
mkdir -p ~/Downloads/train_data
mv ~/Downloads/train2017 ~/Downloads/train_data/
mv ~/Downloads/val2017 ~/Downloads/train_data/
# merge the manually labelled AGV images into the COCO training set
mv ~/Downloads/agv_data/* ~/Downloads/train_data/train2017/

Note that unzipping takes a while.
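
As a sanity check after the merge, count the images; COCO train2017 alone contains 118,287 images, so the first count should be somewhat larger once the AGV data is added:

    ls ~/Downloads/train_data/train2017 | wc -l
    ls ~/Downloads/train_data/val2017 | wc -l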

To train a model:

roscd mms_slam
cd train
python train.py train_param.py

If you have multiple GPUs (say 4), change the GPU count from '1' accordingly. The trained model is saved under mms_slam/train/work_dirs/xxx.pth.
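
If train.py follows the standard mmdet 2.0 training-script pattern, multi-GPU runs go through torch.distributed; whether this repository's train.py accepts the --launcher flag is an assumption, so treat this as a sketch:

    # hypothetical 4-GPU launch, assuming train.py supports mmdet's --launcher flag
    python -m torch.distributed.launch --nproc_per_node=4 train.py train_param.py --launcher pytorch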

6. Acknowledgements

Thanks to A-LOAM, LOAM, LOAM_NOTED, MMDetection and SOLO.
