
Generalizable Stable Points Segmentation for 3D LiDAR Scan-to-Map Long-Term Localization

License: MIT License


Our method segments stable and unstable points in 3D LiDAR scans by exploiting the discrepancy between scan voxels and overlapping map voxels (highlighted as submap voxels). We showcase two LiDAR scans captured during separate localization sessions in an outdoor vineyard: the scan on the left depicts the vineyard state in April, while the scan on the right reveals environmental changes due to plant growth in June.

Qualitative results (video): sps.mp4

Our stable points segmentation prediction for three datasets. The stable points are depicted in black, while the unstable points are represented in red.

Building the Docker image

We provide a Dockerfile and a docker-compose.yaml to run all docker commands.

IMPORTANT: To have GPU access during the build stage, make nvidia the default runtime in /etc/docker/daemon.json:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```
Save the file and run `sudo systemctl restart docker` to restart Docker.

To build the image, simply type the following in the terminal:

bash build_docker.sh

Once the build process finishes, initiate the Docker container in detached mode using Docker Compose from the project directory:

docker-compose up -d # or [docker compose up -d] on newer Docker installations

Usage Instructions

Training

To train the model with the parameters specified in config/config.yaml, follow these steps:

  1. Export the path to the dataset (this may need to be done before starting the container):

    export DATA=path/to/dataset
  2. Initiate training by executing the following command from within the container:

    python scripts/train.py
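Inside the training script, the dataset root can then be picked up from that environment variable. The helper below is a hypothetical sketch of that lookup; the actual mechanism in scripts/train.py may differ:

```python
import os
from pathlib import Path

def resolve_dataset_root():
    """Resolve the dataset root from the DATA environment variable.

    Hypothetical helper mirroring how the training script is pointed at
    the data; the real train.py may read this differently.
    """
    root = os.environ.get("DATA")
    if root is None:
        raise RuntimeError(
            "Set DATA to the dataset path, e.g. `export DATA=path/to/dataset`."
        )
    return Path(root)
```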

Segmentation Metrics

To evaluate the segmentation metrics for a specific sequence:

python scripts/predict.py -seq <SEQ ID>

This command will generate reports for the following metrics:

  • uIoU (unstable points IoU)
  • Precision
  • Recall
  • F1 score
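For binary stable/unstable labels, the metrics above can be sketched as follows. This is a minimal NumPy sketch in which `segmentation_metrics` and its boolean label convention are illustrative; the repository's own evaluation in scripts/predict.py may encode labels differently:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute uIoU, precision, recall and F1 for the 'unstable' class.

    pred, gt: boolean arrays where True marks a point predicted/labelled
    as unstable.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)    # unstable points correctly detected
    fp = np.sum(pred & ~gt)   # stable points flagged as unstable
    fn = np.sum(~pred & gt)   # unstable points missed
    uiou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"uIoU": uiou, "precision": precision,
            "recall": recall, "F1": f1}
```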

Localization

Install and build the following packages in your catkin_ws:

cd </path/to/catkin_ws>/src
git clone https://github.com/koide3/ndt_omp
git clone https://github.com/SMRT-AIST/fast_gicp --recursive 
git clone https://github.com/koide3/hdl_global_localization 
git clone --branch SPS https://github.com/ibrahimhroob/hdl_localization.git
cd ..
catkin build
source devel/setup.bash

Then, the localization experiment can be run using a single command:

bash exp_pipeline/loc_exp_general.bash

To compute the localization metrics, please install the evo library.

Data

You can download the post-processed and labelled BLT dataset and the parking lot sequence of the NCLT dataset from the provided links.

The weights of our pre-trained model can be downloaded as well.

Here is the general structure of the dataset:

DATASET/
├── maps
│   ├── base_map.asc
│   ├── base_map.asc.npy
│   └── base_map.pcd
└── sequence
    ├── SEQ
    │   ├── map_transform
    │   ├── poses
    │   │   ├── 0.txt
    │   │   └── ...
    │   └── scans
    │       ├── 0.npy
    │       └── ...
    │
    └── ...
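Given this layout, scans and poses can be paired by their matching file stems. The helper below is an illustrative sketch only (the `list_scan_pose_pairs` name and the pairing logic are assumptions; the repository's own data loaders may index the data differently):

```python
from pathlib import Path

def list_scan_pose_pairs(dataset_root, seq):
    """Pair each scan with its pose file for one sequence.

    Assumes the layout shown above: scans stored as <idx>.npy and poses
    as <idx>.txt with matching stems, sorted numerically.
    """
    seq_dir = Path(dataset_root) / "sequence" / seq
    pairs = []
    for scan in sorted((seq_dir / "scans").glob("*.npy"),
                       key=lambda p: int(p.stem)):
        pose = seq_dir / "poses" / (scan.stem + ".txt")
        if pose.exists():
            pairs.append((scan, pose))
    return pairs
```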

Publication

If you use our code in your academic work, please cite the corresponding paper:

@article{hroob2024ral,
  author = {I. Hroob* and B. Mersch* and C. Stachniss and M. Hanheide},
  title = {{Generalizable Stable Points Segmentation for 3D LiDAR Scan-to-Map Long-Term Localization}},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  volume = {9},
  number = {4},
  pages = {3546-3553},
  year = {2024},
  doi = {10.1109/LRA.2024.3368236},
}

Acknowledgments

This implementation is inspired by 4DMOS.

License

This project is free software made available under the MIT License. For details see the LICENSE file.
