ekumen-os / beluga-demos

Official demos of the Beluga project

License: Apache License 2.0


Beluga demos!

Here you will find minimal, interactive demos to test the capabilities of the Beluga library.

Note

The Beluga library is not included in this repository. You can find the official Beluga project at https://github.com/Ekumen-OS/beluga.

[Video: hallway.mp4]

Prerequisites

This guide assumes you are using a Linux system with a working Docker installation. You can check if this is the case by running the following command in a terminal:

docker run hello-world

If you don't have Docker installed, you can find installation instructions in the official Docker documentation, including Ubuntu-specific steps.

Quick start

Step 1: Clone the repository

The first step is to clone the repository:

git clone git@github.com:Ekumen-OS/beluga-demos.git

Step 2: Run the Docker container

Step into the repository directory and start the Docker container. The first time you run it, the image will be built first; this build process may take a few minutes.

cd beluga-demos
./docker/run.sh --build

Step 3: Build the demo software code

If the previous step succeeded, you are now inside the Docker container. Next, build the demo software stack:

demo_build

Step 4: Run a demo

Finally, run one of the demos by typing its alias name in the terminal. For instance, the following command runs the lidar_likelihood_model_hallway_demo demo:

lidar_likelihood_model_hallway_demo

You'll find the list of predefined demos in the Available demos section below.

Once the demo starts you'll see three windows pop up (see the screen capture below):

  • Gazebo simulation running the robot and the simulated world around it.
  • RViz visualization showing sensor input values, pose estimation, particle filter belief, maps, etc.
  • A small terminal that you can use to move the robot around.

These windows can be seen in the following screen capture: Gazebo on the right-hand side, RViz on the left-hand side, and the terminal in the upper-right corner.

[Screen capture: demo running]

You can close the demo by pressing Ctrl+C in the terminal where you initially started it.

Available demos

These are the predefined demos that you can run:

  • lidar_beam_model_hallway_demo: Uses the beluga_amcl node to localize in a world built out of the Cartographer Magazino dataset map. The node is configured with the beam sensor model.
  • lidar_likelihood_model_hallway_demo: Same hallway world as above, with the node configured to use the likelihood sensor model.
  • lidar_beam_model_office_demo: Uses the beluga_amcl node to move around a large office cluttered with unmapped obstacles, configured with the beam sensor model.
  • lidar_likelihood_model_office_demo: Same office world as above, configured with the likelihood sensor model.
  • apriltags_localization_demo: Simple custom localization node using the Beluga library to localize the robot within a large $10m \times 10m$ area using AprilTag markers as landmarks. The code of this localization node is included in this repository.
  • light_beacons_localization_demo: Simple custom localization node using the Beluga library to localize the robot within a large $10m \times 10m$ area using light sources as landmarks. The code of this localization node is included in this repository.
  • nav2_integration_demo: Uses the beluga_amcl node in lieu of nav2_amcl in a Nav2 stack to navigate around a large office cluttered with unmapped obstacles.
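The two landmark-based demos (apriltags_localization_demo and light_beacons_localization_demo) follow the classic particle-filter pattern: weight each particle by how well the predicted landmark measurements match the observed ones, then resample. The sketch below illustrates only that pattern; it is a minimal, library-agnostic toy, and none of the names, landmark positions, or noise values come from the Beluga API or from these demos.

```python
import math
import random

# Hypothetical, library-agnostic sketch of one landmark-based particle
# filter update step. All values here are illustrative assumptions.
LANDMARKS = [(2.0, 3.0), (7.0, 8.0), (4.0, 9.0)]  # known landmark positions (m)
SENSOR_SIGMA = 0.5  # assumed range-measurement noise (m)

def likelihood(particle, measured_ranges):
    """Weight a particle by how well its predicted ranges match the measurements."""
    x, y = particle
    w = 1.0
    for (lx, ly), z in zip(LANDMARKS, measured_ranges):
        predicted = math.hypot(lx - x, ly - y)
        err = z - predicted
        w *= math.exp(-0.5 * (err / SENSOR_SIGMA) ** 2)
    return w

def resample(particles, weights):
    """Multinomial resampling proportional to importance weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# One update step: weight every particle, then resample.
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
true_pose = (3.0, 4.0)
ranges = [math.hypot(lx - true_pose[0], ly - true_pose[1]) for lx, ly in LANDMARKS]
weights = [likelihood(p, ranges) for p in particles]
particles = resample(particles, weights)
```

In the actual demos this weighting runs per sensor frame inside a ROS 2 node, with Beluga supplying the filter machinery; the sketch only shows the measurement model being approximated.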

Under the hood

Under the hood, this repository is a ROS 2 workspace with a few packages, plus a custom Docker image that contains all the dependencies needed to run the demos.

The usual ROS 2 tooling can be used to build, launch and examine the demos inside the Docker container, if so desired.

The demo_build alias command is a wrapper around the colcon build tool that builds the workspace and sources it. It's equivalent to running the following commands:

cd ~/ws
colcon build --symlink-install
source install/setup.bash

The individual demo alias commands are just wrappers around the ros2 launch command. For instance, the lidar_likelihood_model_hallway_demo alias is equivalent to running the following command after having built the workspace:

ros2 launch beluga_demo_lidar_localization demo_hallway_likelihood_localization.launch.py

The full list of available aliases is defined in a file within this repository.

beluga-demos's People

Contributors: glpuga, alondruck, hidmic

beluga-demos's Issues

Adapt Beluga demos to Beluga 2.0.1

Description

The beluga::mixin component is no longer used in the Beluga project as of version 2.0.1. Consequently, the beluga_demo_landmark_localization package fails to compile against this version.

To resolve this issue, please update the beluga_demo_landmark_localization package to be compatible with Beluga 2.0.1. You can use the beluga_amcl package as a reference for the necessary changes.

Demo Beluga on an Andino robot

Feature description

Time for Beluga to meet Andino. A simple 2D localization demo would do, but we can also aim for a landmark-based demo using Andino's monocular camera.

Implementation considerations

Until Beluga is released, adding the dependency over at the Andino repository is a no-go. A release is bound to happen in about 3 months (May 2024). We can set up the demo here in the meantime.

Add ROS-agnostic 2D localization demo

Feature description

Beluga boasts about being decoupled from ROS, but every single example and demo we have so far relies on ROS in one way or another. We need a demo (a basic 2D localization demo is enough) that proves it is possible to stay away from ROS.

Implementation considerations

Perhaps the trickiest part of this demo is going to be emulating or simulating the target system. An in-code simulation with MuJoCo may be the easiest to integrate in C++ while keeping code complexity low. There's also Drake, but that has a steeper learning curve.
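As a scoping aid, a fully self-contained toy version of such a ROS-free localization loop fits in a few dozen lines. Everything below, from the one-dimensional corridor world to the noise values, is an illustrative assumption rather than a proposal for the final demo:

```python
import math
import random

# Toy, ROS-free Monte Carlo localization loop: a point robot moves along
# a corridor and measures its distance to a wall at WALL_X. All names and
# values here are illustrative assumptions, not part of Beluga.
random.seed(7)
WALL_X = 10.0
MOTION_NOISE = 0.05   # assumed per-step motion noise (m)
SENSOR_NOISE = 0.2    # assumed range-measurement noise (m)

def sense(x):
    """Simulated noisy range measurement from position x to the wall."""
    return (WALL_X - x) + random.gauss(0.0, SENSOR_NOISE)

def step(particles, control, measurement):
    # Motion update: apply the control with some noise per particle.
    moved = [p + control + random.gauss(0.0, MOTION_NOISE) for p in particles]
    # Measurement update: weight by the range likelihood, then resample.
    weights = [
        math.exp(-0.5 * ((measurement - (WALL_X - p)) / SENSOR_NOISE) ** 2)
        for p in moved
    ]
    return random.choices(moved, weights=weights, k=len(moved))

# The filter starts with no idea where the robot is, then converges.
true_x = 1.0
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
for _ in range(20):
    true_x += 0.3
    particles = step(particles, 0.3, sense(true_x))

estimate = sum(particles) / len(particles)
```

An in-code MuJoCo (or Drake) simulation would replace the `sense` function and the hand-rolled motion model here, while the predict/update/resample loop stays the same shape.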
