
Volumetric Grasping Network

VGN is a 3D convolutional neural network for real-time 6 DOF grasp pose detection. The network accepts a Truncated Signed Distance Function (TSDF) representation of the scene and outputs a volume of the same spatial resolution, where each cell contains the predicted quality, orientation, and width of a grasp executed at the center of the voxel. The network is trained on a synthetic grasping dataset generated with physics simulation.

[Figure: VGN overview]
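As a rough illustration of the interface described above, the following sketch shows the expected tensor shapes (the 40³ grid resolution, the quaternion orientation encoding, and the get_network import path are assumptions to verify against vgn/networks.py):

import torch
from vgn.networks import get_network   # module path assumed

net = get_network("conv")              # the 3D CNN described above
tsdf = torch.rand(1, 1, 40, 40, 40)    # one TSDF volume on an assumed 40^3 grid

with torch.no_grad():
    qual, rot, width = net(tsdf)       # per-voxel quality, orientation, width

# expected shapes (assumption): qual (1, 1, 40, 40, 40), rot (1, 4, 40, 40, 40), width (1, 1, 40, 40, 40)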

This repository contains the implementation of the following publication:

  • M. Breyer, J. J. Chung, L. Ott, R. Siegwart, and J. Nieto. Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter. Conference on Robot Learning (CoRL 2020), 2020. [pdf][video]

If you use this work in your research, please cite accordingly.

The next sections provide instructions for getting started with VGN.

Installation

The following instructions were tested with Python 3.8 on Ubuntu 20.04. A ROS installation is only required for visualizations and for interfacing with hardware; simulations and network training run without it.

OpenMPI is optionally used to distribute the data generation over multiple cores/machines.

sudo apt install libopenmpi-dev

Clone the repository into the src folder of a catkin workspace.

git clone https://github.com/ethz-asl/vgn

Create and activate a new virtual environment.

cd /path/to/vgn
python3 -m venv --system-site-packages .venv
source .venv/bin/activate

Install the Python dependencies within the activated virtual environment.

pip install -r requirements.txt

Build and source the catkin workspace,

catkin build vgn
source /path/to/catkin_ws/devel/setup.zsh

or alternatively install the project locally in "editable" mode using pip.

pip install -e .

Finally, download the data folder here, then unzip and place it in the repo's root.

Data Generation

Generate raw synthetic grasping trials using the pybullet physics simulator.

python scripts/generate_data.py data/raw/foo --scene pile --object-set blocks [--num-grasps=...] [--sim-gui]
  • python scripts/generate_data.py -h prints a list with all the options.
  • mpirun -np <num-workers> python ... will run multiple simulations in parallel.

The script will create the following file structure within data/raw/foo (a short sketch for inspecting these files follows the list):

  • grasps.csv contains the configuration, label, and associated scene for each grasp,
  • scenes/<scene_id>.npz contains the synthetic sensor data of each scene.
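For a quick sanity check of the generated files, something along these lines works (a minimal sketch; the label and scene_id column names are assumptions based on the description above, everything else is only printed):

import numpy as np
import pandas as pd

df = pd.read_csv("data/raw/foo/grasps.csv")
print(df.columns.tolist())            # grasp configuration, label, and scene id columns
print(df["label"].value_counts())     # positive/negative balance ("label" column assumed)

scene_id = df.iloc[0]["scene_id"]     # column name assumed
scene = np.load(f"data/raw/foo/scenes/{scene_id}.npz")
print(scene.files)                    # names of the stored sensor data arrays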

The data.ipynb notebook is useful for cleaning, balancing, and visualizing the generated data.

Finally, generate the voxel grids/grasp targets required to train VGN.

python scripts/construct_dataset.py data/raw/foo data/datasets/foo
  • Samples of the dataset can be visualized with the vis_sample.py script and the vgn.rviz configuration. The script includes the option to apply a random affine transform to the input/target pair to check the data augmentation procedure; a minimal sketch of this check is given below.
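The idea behind that check is to apply the same rigid transform to the voxel grid and to the grasp pose and confirm they stay consistent. A stand-alone sketch of the principle (not the actual vis_sample.py code; the 40³ resolution and a rotation about the vertical axis are assumptions):

import numpy as np
from scipy import ndimage
from scipy.spatial.transform import Rotation

resolution = 40
tsdf = np.random.rand(resolution, resolution, resolution).astype(np.float32)
grasp_index = np.array([20.0, 10.0, 15.0])   # grasp centre in voxel coordinates

# random rotation about the vertical axis through the grid centre
angle = np.random.uniform(0.0, 2.0 * np.pi)
R = Rotation.from_rotvec(angle * np.array([0.0, 0.0, 1.0]))
centre = np.full(3, (resolution - 1) / 2.0)

# rotate the voxel grid; ndimage.affine_transform maps output to input coordinates,
# so it takes the inverse rotation and the matching offset
R_inv = R.inv().as_matrix()
tsdf_rot = ndimage.affine_transform(tsdf, R_inv, offset=centre - R_inv @ centre, order=1)

# apply the same rotation to the grasp position (and compose it with its orientation)
grasp_index_rot = R.apply(grasp_index - centre) + centre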

Network Training

python scripts/train_vgn.py --dataset data/datasets/foo [--augment]

Training and validation metrics are logged to TensorBoard and can be accessed with

tensorboard --logdir data/runs
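Conceptually, the network is supervised at the voxel of each recorded grasp: quality is always supervised, while orientation and width only matter for positive grasps. A simplified per-sample loss in that spirit (an illustration only, not the exact loss implemented in scripts/train_vgn.py):

import torch
import torch.nn.functional as F

def grasp_loss(pred_qual, pred_rot, pred_width, label, target_rot, target_width):
    # pred_qual: predicted success probability at the grasp voxel, label: 0.0 or 1.0
    loss_qual = F.binary_cross_entropy(pred_qual, label)
    # quaternion distance, invariant to the sign of the quaternion
    loss_rot = 1.0 - torch.abs(torch.sum(pred_rot * target_rot))
    loss_width = F.mse_loss(pred_width, target_width)
    # orientation and width terms only contribute for successful grasps
    return loss_qual + label * (loss_rot + loss_width)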

Simulated Grasping

Run simulated clutter removal experiments.

python scripts/sim_grasp.py --model data/models/vgn_conv.pth [--sim-gui] [--rviz]
  • python scripts/sim_grasp.py -h prints a complete list of optional arguments.
  • To detect grasps using GPD, you first need to install and launch the gpd_ros node (roslaunch vgn gpd.launch).
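Grasp detection can also be invoked directly from Python; the following sketch outlines the idea (module paths, constructor arguments, the state fields, and the return signature are assumptions to check against scripts/sim_grasp.py):

from types import SimpleNamespace

from vgn.detection import VGN            # module path assumed
from vgn.perception import TSDFVolume    # TSDF integration helper, path assumed

grasp_planner = VGN("data/models/vgn_conv.pth")

tsdf = TSDFVolume(size=0.3, resolution=40)   # 30 cm workspace cube, 40^3 voxels (assumed)
# tsdf.integrate(depth_img, intrinsic, extrinsic)   # one call per captured view

state = SimpleNamespace(tsdf=tsdf, pc=tsdf.get_cloud())   # state fields assumed
grasps, scores, planning_time = grasp_planner(state)      # ranked grasp candidates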

Use the clutter_removal.ipynb notebook to compute metrics and visualize failure cases of an experiment.

Robot Grasping

This package contains an example of open-loop grasp execution on a Franka Emika Panda with a wrist-mounted Intel RealSense D435 depth sensor.

First, launch the robot and sensor drivers

roslaunch vgn panda_grasp.launch

Then in a second terminal, run

python scripts/panda_grasp.py --model data/models/vgn_conv.pth

Citing

@inproceedings{breyer2020volumetric,
 title={Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter},
 author={Breyer, Michel and Chung, Jen Jen and Ott, Lionel and Siegwart, Roland and Nieto, Juan},
 booktitle={Conference on Robot Learning},
 year={2020},
}

To Do

  • Verify the panda scripts on ROS Noetic.
