
Leg Tracker Benchmarks

for ROS Indigo

  • Repo for recreating the results in the paper: A. Leigh, J. Pineau, N. Olmedo and H. Zhang, Person Tracking and Following with 2D Laser Scanners, International Conference on Robotics and Automation (ICRA), Seattle, Washington, USA, 2015

Usage

Installation

  • clone the leg_tracker_benchmarks repo into your ROS Indigo catkin workspace's source directory
  • extract all the zip files in place in the 'benchmark_rosbags/annotated' folder
  • install SciPy:
    • $ sudo apt-get install python-scipy
  • install pykalman (http://pykalman.github.io/#installation or https://pypi.python.org/pypi/pykalman):
    • $ sudo pip install pykalman
  • install munkres for Python:
    • $ sudo pip install munkres
  • install the Bayesian Filtering Library (BFL):
    • $ sudo apt-get install ros-indigo-bfl
  • $ cd [your catkin_workspace]
  • $ catkin_make

If you currently have the leg_tracker repo in your catkin workspace, you will have to move it out, because leg_tracker_benchmarks contains a version of the same package.

Running the tracking benchmarks

  • $ roslaunch leg_tracker [benchmark name]

This will read in the corresponding rosbag from benchmark_rosbags/annotated and publish it in a deterministic manner (so there are no race conditions or time-dependent inconsistencies between runs) using playback_and_record_tracked.py. The tracker's position estimates will be saved to benchmark_rosbags/annotated_and_tracked.
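
For reference, here is a minimal sketch of the deterministic-playback idea: publish one scan, block until the tracker responds, then advance. It is illustrative only, not the repo's playback_and_record_tracked.py; the topic names and the tracker's output message type are assumptions.

    #!/usr/bin/env python
    # Illustrative sketch only: the topic names ('scan', 'tracked_people')
    # and the tracker output type are assumptions, not the repo's code.
    import rospy
    import rosbag
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import PoseArray

    class DeterministicPlayer(object):
        def __init__(self, bag_path):
            self.bag = rosbag.Bag(bag_path)
            self.scan_pub = rospy.Publisher('scan', LaserScan, queue_size=1)
            self.got_reply = False
            rospy.Subscriber('tracked_people', PoseArray, self.reply_cb)

        def reply_cb(self, msg):
            self.got_reply = True

        def run(self):
            for topic, msg, t in self.bag.read_messages(topics=['scan']):
                self.got_reply = False
                self.scan_pub.publish(msg)
                # Block until the tracker has processed this scan, so the
                # next one is never published early.
                while not self.got_reply and not rospy.is_shutdown():
                    rospy.sleep(0.001)

    if __name__ == '__main__':
        rospy.init_node('deterministic_player')
        DeterministicPlayer('benchmark.bag').run()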

Note that there is a duplicate set of launch files for the runtime benchmarks; these just run everything as fast as possible, without worrying about race conditions.

Feel free to uncomment the Rviz command in the launch file to see the visualizations.

CLEAR MOT results should be very close to those in the reference paper. To reproduce the results exactly, check out the oldest commit in this repo. The caveat is that the old code is not as well documented and is messier than at head.

Running the benchmark CLEAR MOT evaluation

  • $ roslaunch clear_mot [benchmark name]

This will read in the tracker output data from benchmark_rosbags/annotated_and_tracked, evaluate the CLEAR MOT metrics and write the output to benchmark_rosbags/annotated_and_tracked_and_clear_mot. It will also print the CLEAR MOT scores to the terminal.
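
As a reminder of what the scores mean, here is a minimal sketch of the two CLEAR MOT scores, MOTA and MOTP, computed from per-frame counts (following Bernardin and Stiefelhagen's definitions). The data-association step, which is what the munkres package is used for, is omitted, and the input format is an assumption for illustration.

    # Minimal CLEAR MOT sketch: MOTA and MOTP from per-frame counts.
    # Data association (munkres/Hungarian in practice) is omitted; the
    # input format here is an assumption for illustration.
    def clear_mot(frames):
        """frames: list of dicts with keys 'misses', 'false_positives',
        'mismatches', 'gt' (number of ground-truth people in the frame)
        and 'match_dists' (distances of the matched track/truth pairs)."""
        misses = sum(f['misses'] for f in frames)
        false_pos = sum(f['false_positives'] for f in frames)
        switches = sum(f['mismatches'] for f in frames)
        total_gt = sum(f['gt'] for f in frames)
        dists = [d for f in frames for d in f['match_dists']]
        mota = 1.0 - float(misses + false_pos + switches) / total_gt
        motp = sum(dists) / len(dists) if dists else 0.0
        return mota, motp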

You can visualize the CLEAR MOT errors and results using

  • $ roslaunch clear_mot view_results.launch

This launches an Rviz window with a panel that lets you step through the data and shows all the CLEAR MOT data associations and errors. To view different files, change the "readbag_filename" param in the launch file.

Annotating ground truth in new data

  • $ roslaunch annotate_rosbags annotate.launch

This is the tool I used to annotate the ground-truth tracks in the rosbags. After launching, simply click on the person's location in the map; the tool will save the location of your click and advance to the next scan.
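
The core of the interaction can be sketched with Rviz's "Publish Point" tool, which publishes clicks on /clicked_point. This is only an illustration of the idea, not the repo's annotation code.

    # Illustrative sketch of click-to-annotate: record each Rviz
    # "Publish Point" click as a ground-truth position. Not the repo's
    # actual annotation code.
    import rospy
    from geometry_msgs.msg import PointStamped

    annotations = []

    def click_cb(msg):
        # Save the clicked map position for the current scan...
        annotations.append((msg.header.stamp, msg.point.x, msg.point.y))
        # ...a real tool would then publish the next scan here.

    rospy.init_node('annotate_sketch')
    rospy.Subscriber('/clicked_point', PointStamped, click_cb)
    rospy.spin()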

I also used split.launch and merge.launch for rosbags that were too big and unwieldy. This way, you can split long rosbags, annotate the pieces individually and merge them afterwards. Then, if you make a mistake in your annotation, it is much easier to fix because it is isolated to one rosbag.
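
For reference, splitting a bag at a timestamp is straightforward with the rosbag Python API. This is just a sketch of the idea (the paths and split time are placeholders), not what split.launch actually runs.

    # Sketch: split a rosbag in two at a given time. The paths and split
    # time are placeholders; this is not the repo's split.launch.
    import rosbag

    def split_bag(in_path, out_a, out_b, split_time):
        with rosbag.Bag(out_a, 'w') as a, rosbag.Bag(out_b, 'w') as b:
            for topic, msg, t in rosbag.Bag(in_path).read_messages():
                target = a if t.to_sec() < split_time else b
                target.write(topic, msg, t)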

Camera data

Camera data is also available from two separate webcams mounted immediately on top of the laser scanner. Only the compressed image stream is included, to keep the files to a reasonable size. One way to recover the raw image topics from the compressed topics is image_transport's republish node:

  • $ rosrun image_transport republish compressed in:=/vision/image raw out:=/vision/image

You can also view them with the image_view package:

  • $ rosrun image_view image_view image:=/usb_cam_node2/image_raw _image_transport:=compressed

Unfortunately, the cameras are not calibrated as we only anticipated using them as a rough guide for ground-truth annotation and not for image processing.

To keep this repo to a reasonable size, and so folks who don't want the images aren't forced to download them, the images are stored in a separate repo on BitBucket: https://bitbucket.org/aleigh/leg_tracker_benchmarks_jpg. It is about 1 GB in size.

To use the rosbags with the camera data, you should copy them into the benchmark_rosbags/annotated folder and replace the existing rosbags there.
