
habitat-lab

Code for training embodied agents using imitation learning at scale in Habitat-Lab.

License: MIT License

Custom fork of: https://github.com/Ram81/habitat-imitation-baselines

For the master's thesis at the University of Antwerp: "Offline Reinforcement Learning for Robotic Navigation"

Changes

  • Added missing requirements
    • python-lmdb needs to be installed via conda; pip will not work (see the snippet after this list)
  • Fixed an issue in habitat_baselines/utils/env_utils.py
  • Added configs for running on single scenes
  • Added offline RL algorithms from https://github.com/tinkoff-ai/CORL
  • Added scripts for running and evaluating offline RL algorithms
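
As noted in the list above, the LMDB bindings should come from conda rather than pip. A minimal sketch, assuming the conda-forge channel and the habitat-web environment created during installation:

    conda activate habitat-web
    # LMDB Python bindings; the pip package did not work in this setup
    conda install -c conda-forge python-lmdb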

Habitat-Web

Code for training imitation learning agents for ObjectNav and Pick-and-Place in Habitat. This is the official code repository for the paper Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale.

Reproducing Results

We provide the best checkpoints for agents trained on ObjectNav and Pick-and-Place. You can use the following checkpoints to reproduce the results reported in our paper.

Task Split Checkpoint Success Rate SPL
🆕 ObjectNav v1 objectnav_semseg.ckpt 27.8 9.9
🆕 Pick-and-Place New Initializations pick_place_rgbd_new_inits.ckpt 17.5 9.8
🆕 Pick-and-Place New Instructions pick_place_rgbd_new_insts.ckpt 15.1 8.3
🆕 Pick-and-Place New Environments pick_place_rgbd_new_envs.ckpt 8.3 4.1

You can find the pretrained RedNet semantic segmentation model weights here and the pretrained depth encoder weights here.

Overview

The primary code contributions from the paper are located in:

  • Imitation Learning Baselines:

    • ObjectNav: habitat_baselines/il/env_based/
    • Pick-and-Place: habitat_baselines/il/disk_based/
  • Experiment Configurations:

    • ObjectNav: habitat_baselines/config/objectnav/*.yaml
    • Pick-and-Place: habitat_baselines/config/pickplace/*.yaml
  • Replay Scripts:

    • ObjectNav: examples/objectnav_replay.py
    • Pick-and-Place: examples/pickplace_replay.py

Installation

  1. Clone the repository and install habitat-web-baselines using the commands below. Note that python=3.6 is required for working with habitat-web-baselines. All the development was done on habitat-lab=0.1.6.

    git clone https://github.com/Ram81/habitat-web-baselines.git
    cd habitat-web-baselines
    
    # We require python>=3.6 and cmake>=3.10
    conda create -n habitat-web python=3.6 cmake=3.14.0
    conda activate habitat-web
    
    pip install -e .
    python setup.py develop --all
  2. Install our custom build of habitat-sim. We highly recommend building habitat-sim from source when working with habitat-web-baselines. Use the following commands to set it up:

    git clone git@github.com:Ram81/habitat-sim.git
    cd habitat-sim
  3. Install dependencies

    Common

    pip install -r requirements.txt

    Linux (Tested with Ubuntu 18.04 with gcc 7.4.0)

    sudo apt-get update || true
    # These are fairly ubiquitous packages and your system likely has them already,
    # but if not, let's get the essentials for EGL support:
    sudo apt-get install -y --no-install-recommends \
         libjpeg-dev libglm-dev libgl1-mesa-glx libegl1-mesa-dev mesa-utils xorg-dev freeglut3-dev

    See this configuration for a full list of dependencies that our CI installs on a clean Ubuntu VM. If you run into build errors later, this is a good place to check if all dependencies are installed.

  4. Build Habitat-Sim

    Default build with bullet (for machines with a display attached)

    # Assuming we're still within habitat conda environment
    ./build.sh --bullet

    For headless systems (i.e. without an attached display, e.g. in a cluster) and multiple GPU systems

    ./build.sh --headless --bullet
  5. For use with habitat-web-baselines and your own Python code, add habitat-sim to your PYTHONPATH. For example, modify your .bashrc (or .bash_profile on macOS) by adding the line:

    export PYTHONPATH=$PYTHONPATH:/path/to/habitat-sim/
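
As a quick sanity check after installation (optional; a minimal sketch assuming both packages were installed into the habitat-web environment as above):

    # Both imports should succeed without errors
    python -c "import habitat, habitat_sim; print('ok')"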

Data

Downloading MP3D Scene Dataset

  • The Matterport3D (MP3D) scenes must be requested and downloaded through the official Matterport3D dataset process. Place the Habitat-compatible scene files (.glb and .navmesh) under data/scene_datasets/mp3d/ (see the folder structure below).

Downloading Object Assets

  • Download the object assets used for Pick-and-Place task and THDA ObjectNav episodes from here.

  • Unzip the object assets and verify they are stored at data/test_assets/objects/.
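
A minimal sketch of the extraction step; objects.zip is a placeholder for whatever archive name you downloaded, and you may need to move files up one level depending on how the archive is laid out:

    # Hypothetical archive name; substitute the file you downloaded
    unzip objects.zip -d data/test_assets/objects/
    # Verify the assets are in place
    ls data/test_assets/objects/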

Downloading Human Demonstrations Dataset

You can use the following datasets to reproduce results reported in our paper.

Dataset Scene dataset Split Link Extract path
ObjectNav-HD MP3D 70k objectnav_mp3d_70k.json.gz data/datasets/objectnav/objectnav_mp3d_70k/
ObjectNav-HD MP3D+Gibson Full objectnav_mp3d_gibson_80k.json.gz data/datasets/objectnav/objectnav_mp3d_gibson_80k/
Pick-and-Place-HD MP3D Full pick_place_12k.json.gz data/datasets/pick_place/pick_place_12k/
Pick-and-Place-HD MP3D New Initializations pick_place_unseen_initializations.json.gz data/datasets/pick_place/unseen_initializations/
Pick-and-Place-HD MP3D New Instructions pick_place_unseen_instructions.json.gz data/datasets/pick_place/unseen_instructions/
Pick-and-Place-HD MP3D New Environments pick_place_unseen_scenes.json.gz data/datasets/pick_place/unseen_scenes/

The demonstration datasets released as part of this project are licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
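
A minimal sketch of placing one downloaded split, using ObjectNav-HD 70k as an example. This assumes the download is a single file named objectnav_mp3d_70k.json.gz; depending on how it is packaged you may instead need to extract it so that a train/ split ends up under the extract path (see the folder structure below):

    # Create the extract path from the table above
    mkdir -p data/datasets/objectnav/objectnav_mp3d_70k/
    # Hypothetical download location; substitute your own
    mv ~/Downloads/objectnav_mp3d_70k.json.gz data/datasets/objectnav/objectnav_mp3d_70k/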

Dataset Folder Structure

The code expects the datasets in the data folder in the following format:

├── habitat-web-baselines/
│  ├── data
│  │  ├── scene_datasets/
│  │  │  ├── mp3d/
│  │  │  │  ├── JeFG25nYj2p.glb
│  │  │  │  └── JeFG25nYj2p.navmesh
│  │  ├── datasets
│  │  │  ├── objectnav/
│  │  │  │  ├── objectnav_mp3d_70k/
│  │  │  │  │  ├── train/
│  │  │  ├── pick_place/
│  │  │  │  ├── pick_place_12k/
│  │  │  │  │  ├── train/
│  │  ├── test_assets/
│  │  │  ├── objects/
│  │  │  │  ├── apple.glb
│  │  │  │  └── plate.glb
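
If the MP3D scenes already live elsewhere on disk, a symlink is usually enough to match this layout (a minimal sketch; /path/to/mp3d is a placeholder for your actual scene directory):

    mkdir -p data/scene_datasets
    # Placeholder path; point this at your Matterport3D scene directory
    ln -s /path/to/mp3d data/scene_datasets/mp3d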

Packaging Demonstration Datasets

We also provide an example of packaging your own demonstration dataset to train imitation learning agents with habitat-imitation-baselines here.

Test Setup

To verify that the data is set up correctly, run:

python examples/objectnav_replay.py --path data/datasets/objectnav/objectnav_mp3d_70k/sample/sample.json.gz

Usage

Training

For training the behavior cloning policy on the ObjectGoal Navigation task using the environment-based setup:

  1. Use the following script for multi-node training:
sbatch job_scripts/run_objectnav_training.sh habitat_baselines/config/objectnav/il_ddp_objectnav.yaml
  2. To run training on a single node, use:
sbatch job_scripts/run_objectnav_training.sh habitat_baselines/config/objectnav/il_objectnav.yaml
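
If you are not on a SLURM cluster, training can typically be launched directly through the standard habitat_baselines entry point used by habitat-lab 0.1.6 (a minimal sketch; the extra options set by the job script, such as log directories, are not reproduced here):

    # Single-GPU behavior cloning run for ObjectNav
    python -u habitat_baselines/run.py \
        --exp-config habitat_baselines/config/objectnav/il_objectnav.yaml \
        --run-type train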

For training the behavior cloning policy on the Pick-and-Place task using the disk-based setup:

  1. Use the following script for multi-node training:
sbatch job_scripts/run_pickplace_training.sh ddp
  2. To run training on a single node, use:
sbatch job_scripts/run_pickplace_training.sh single_node

Evaluation

To evaluate a pretrained checkpoint on ObjectGoal Navigation, download the objectnav_mp3d_v1 dataset from here.

For evaluating a checkpoint on the ObjectGoal Navigation task using the environment-based setup:

  1. Use the following script if the policy was trained using the distributed setup:
sbatch job_scripts/run_objectnav_eval.sh habitat_baselines/config/objectnav/il_ddp_objectnav.yaml data/datasets/objectnav_mp3d_v1 /path/to/checkpoint
  2. Use the following script for evaluating a single-node checkpoint:
sbatch job_scripts/run_objectnav_eval.sh habitat_baselines/config/objectnav/il_objectnav.yaml data/datasets/objectnav_mp3d_v1 /path/to/checkpoint
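
Evaluation can likewise be launched without SLURM through the same entry point (a minimal sketch; EVAL_CKPT_PATH_DIR is the standard habitat_baselines override for the checkpoint location, and the evaluation dataset path is taken from the config or set via further overrides):

    # Evaluate a single checkpoint on ObjectNav
    python -u habitat_baselines/run.py \
        --exp-config habitat_baselines/config/objectnav/il_objectnav.yaml \
        --run-type eval \
        EVAL_CKPT_PATH_DIR /path/to/checkpoint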

For evaluating the behavior cloning policy on the Pick-and-Place task using the disk-based setup:

  1. Use the following script if the policy was trained using the distributed setup:
sbatch job_scripts/run_pickplace_eval.sh ddp
  2. Use the following script for evaluating a single-node checkpoint:
sbatch job_scripts/run_pickplace_eval.sh single_node

Citation

If you use this code in your research, please consider citing:

@inproceedings{ramrakhya2022,
  title={Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale},
  author={Ram Ramrakhya and Eric Undersander and Dhruv Batra and Abhishek Das},
  year={2022},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}

Contributors

  • alexanderbelooussov
  • ram81
