
Fast and Efficient Locomotion via Learned Gait Transitions

This repository contains the code for the paper "Fast and Efficient Locomotion via Learned Gait Transitions". Apart from the implementation of the paper's results, this repository also contains the entire software interface for the A1 quadruped robot, including:

  • A reasonably accurate simulation of the A1 in PyBullet.
  • The real-robot interface in Python, which allows switching between simulation and the real robot with a single command-line flag (--use_real_robot); a sketch of this switch follows the list.
  • An implementation of the Convex MPC Controller, which achieves robust locomotion on the robot.
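
For illustration, here is a minimal sketch of how such a flag-based switch can look. The flag matches the repository's, but the factory function and constructor signatures are assumptions; see src/robots for the actual classes.

from absl import flags

flags.DEFINE_bool("use_real_robot", False,
                  "Use the real A1 instead of the PyBullet simulation.")
FLAGS = flags.FLAGS

def create_robot(pybullet_client):
  """Returns a simulated or real robot behind the same interface (assumed)."""
  if FLAGS.use_real_robot:
    # Talks to Unitree's SDK through the pybind11 binding.
    from src.robots import a1_robot
    return a1_robot.A1Robot(pybullet_client)
  # Pure PyBullet simulation.
  from src.robots import a1
  return a1.A1(pybullet_client)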

Running the Convex MPC Controller

Set up the environment

First, make sure the environment is set up by following the steps in the Setup section.

Run the code:

python -m src.convex_mpc_controller.convex_mpc_controller_example --show_gui=True --max_time_secs=10 --world=plane

Change the world argument to one of [plane, slope, stair, uneven] to select different worlds. The current MPC controller has been tuned for all four worlds.

Reproducing Paper Results

Set up the environment

First, make sure the environment is set up by following the steps in the Setup section.

Evaluating Policy

We provide a learned gait policy in example_checkpoints. You can check it out by running:

python -m src.agents.cmaes.eval_cmaes --logdir=example_checkpoints/lin_policy_plus_150.npz --show_gui=True --save_data=False --save_video=False

Please check the Python file for all available command line flags. Note that when running on the real robot (--use_real_robot=True), the code requires an Xbox-like gamepad as an E-stop. See the Code Structure section for further details.
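
The checkpoint stores a linear policy. As a rough illustration of what evaluation involves, the sketch below loads such a checkpoint and applies the policy to an observation; the exact npz layout (weight matrix plus observation-normalization statistics) is an assumption, so consult eval_cmaes for the authoritative loading code.

import numpy as np

# Assumed layout: the first array in the archive holds the weight matrix
# together with the observation mean and standard deviation.
data = np.load("example_checkpoints/lin_policy_plus_150.npz", allow_pickle=True)
weights, obs_mean, obs_std = data[data.files[0]][:3]

def policy(observation):
  # Linear policy: normalize the observation, then apply the weight matrix.
  return weights @ ((observation - obs_mean) / (obs_std + 1e-8))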

Training

To speed up training, we use ray to parallelize rollouts. First, start ray by running:

ray start --head --port=6379 --num-cpus=[NUM_CPUS] --redis-password=1234

and start training by running:

python -m src.agents.cmaes.train_cmaes --config src/agents/cmaes/configs/gait_change_deluxe.py --experiment_name="exp"

You can then see the checkpoints and tensorboard logs in the logs folder and evaluate the trained policy using eval_cmaes as described above.
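
To give a flavor of what the parallelization looks like, here is a hedged sketch of fanning rollouts out to Ray workers; the objective is a placeholder, and depending on your Ray version the connection call may also need the redis password passed explicitly.

import ray

ray.init(address="auto")  # connect to the cluster started with `ray start`

@ray.remote
def rollout(policy_params):
  # Each remote call would evaluate one sampled policy in its own worker
  # process; here we just return a placeholder score.
  return float(sum(policy_params))

candidates = [[0.1, 0.2], [0.3, 0.4]]  # sampled policy parameters
returns = ray.get([rollout.remote(p) for p in candidates])
print(returns)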

Setup

Software

It is recommended to create a separate virtualenv or conda environment to avoid conflicts with existing system packages. The required packages have been tested under Python 3.8.5, though they should be compatible with other Python versions.

The following three steps are required to set up the Python environment; we were not able to consolidate them into a single setup.py.

  1. First, install all dependent packages by running:

    pip install -r requirements.txt
  2. Second, install the C++ binding for the convex MPC controller:

    python setup.py install
  3. Lastly, build and install the interface to Unitree's SDK. Unitree periodically releases new SDK versions; for convenience, we have included the version that we used in third_party/unitree_legged_sdk.

    First, make sure the required packages are installed, following Unitree's guide. Most notably, please make sure to install Boost and LCM:

    sudo apt install libboost-all-dev liblcm-dev

    Then, go to third_party/unitree_legged_sdk and create a build folder:

    cd third_party/unitree_legged_sdk
    mkdir build && cd build

    Now, build the libraries and move them to the main directory by running:

    cmake ..
    make
    mv robot_interface* ../../..
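
After these steps, a quick way to confirm that the binding built correctly is to import it from the repository root (a hedged sanity check; the module is the shared library moved in the previous step):

# Run from the repository root after the build above.
import robot_interface
print("robot_interface loaded from:", robot_interface.__file__)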

Additional Setup for Real Robot

Follow these steps if you want to run policies on the real robot.

  1. Setup correct permissions for non-sudo user

    Since the Unitree SDK requires memory locking and high process priority, commands usually need to be executed with root privileges (sudo). To run the SDK without sudo, write the following to /etc/security/limits.d/90-unitree.conf, replacing <username> with your user name:

    <username> soft memlock unlimited
    <username> hard memlock unlimited
    <username> soft nice eip
    <username> hard nice eip

    Log out and log back in for the above changes to take effect.

  2. Connect to the real robot

    Connect your computer to the real robot using an Ethernet cable, and set the computer's IP address to 192.168.123.24 (or any address in the 192.168.123.X range that does not collide with the robot's existing IPs). Make sure you can ping/SSH into the robot's TX2 computer.

  3. Test connection

    Start up the robot. After the robot stands up, enter joint-damping mode by pressing L2+B on the remote controller. Then, run the following:

    python -m src.robots.a1_robot_exercise_example --use_real_robot=True

    The robot should move its body up and down, following a pre-set trajectory. Terminate the script at any time to bring the robot back to joint-damping mode.
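
The pre-set trajectory is essentially a smooth oscillation of the body height, realized by interpolating the joints between two poses. A minimal sketch of the idea (the poses, timing, and the commented-out robot call are all illustrative, not the script's actual constants):

import time
import numpy as np

STAND_POSE = np.array([0.0, 0.9, -1.8] * 4)   # nominal standing joint angles
CROUCH_POSE = np.array([0.0, 1.2, -2.4] * 4)  # lowered body pose

def desired_joint_angles(t, period_secs=2.0):
  # Blend between the two poses with a sinusoidal 0..1 phase.
  phase = 0.5 * (1.0 + np.sin(2.0 * np.pi * t / period_secs))
  return (1.0 - phase) * STAND_POSE + phase * CROUCH_POSE

start = time.time()
while time.time() - start < 5.0:
  q_desired = desired_joint_angles(time.time() - start)
  # robot.apply_action(q_desired)  # hypothetical call on the robot API
  time.sleep(0.01)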

Code Structure

Simulation

The simulation infrastructure is mostly a lightweight wrapper around PyBullet that provides convenient APIs for locomotion purposes; a short usage sketch follows the file list. The main files are:

  • src/robots/robot.py contains general robot API.
  • src/robots/a1.py contains A1-specific configurations.
  • src/robots/motors.py contains motor configurations.
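
As a hedged usage sketch, the wrapper sits on top of a standard PyBullet client roughly as below; the constructor keyword and the accessor are assumptions based on the module layout, not the exact API.

import pybullet
from pybullet_utils import bullet_client
from src.robots import a1

p = bullet_client.BulletClient(connection_mode=pybullet.DIRECT)
p.setGravity(0, 0, -9.8)
robot = a1.A1(pybullet_client=p)  # assumed constructor signature
print(robot.base_position)        # hypothetical convenience accessor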

Real Robot Interface

The real robot infrastructure is mostly implemented in src/robots/a1_robot.py, which invokes the C++ interface via pybind to communicate with Unitree's SDK. In addition:

  • src/robots/a1_robot_state_estimator.py provides a simple Kalman-filter-based implementation to estimate the robot's speed; a sketch of the idea follows the list.

  • src/robots/gamepad_reader.py contains a simple wrapper to read Xbox-like gamepads, which is useful for remote-controlling the robot. The gamepads that have been tested to work include the Logitech F710 and GameSir T1s, though any gamepad with similar functionality should work.
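
To illustrate the estimator's idea, here is a hedged one-dimensional sketch: predict the velocity by integrating the IMU acceleration, then correct it with a velocity "measurement" derived from stance-leg kinematics. The gains are illustrative; see a1_robot_state_estimator.py for the real filter.

class SimpleVelocityFilter:
  def __init__(self, process_var=0.1, measurement_var=0.5):
    self.v = 0.0  # velocity estimate
    self.p = 1.0  # estimate variance
    self.q = process_var
    self.r = measurement_var

  def update(self, accel, dt, leg_odometry_velocity):
    # Predict: integrate acceleration and inflate the uncertainty.
    self.v += accel * dt
    self.p += self.q * dt
    # Correct: standard scalar Kalman update against leg odometry.
    k = self.p / (self.p + self.r)
    self.v += k * (leg_odometry_velocity - self.v)
    self.p *= 1.0 - k
    return self.v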

Convex MPC Controller

The src/convex_mpc_controller folder contains a Python implementation of MIT's Convex MPC Controller. Some notable files include:

  • torque_stance_leg_controller_mpc.py sets up and solves the MPC problem for the stance legs.
  • mpc_osqp.cc actually sets up the QP and calls a QP library to solve it; a simplified sketch of the QP's structure follows the list.
  • raibert_swing_leg_controller.py controls the swing legs.
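
At a high level, the controller linearizes the body dynamics over a short horizon and solves for ground-reaction forces that track a desired body trajectory under friction and unilateral-contact limits. The sketch below mimics that structure with tiny placeholder matrices and a generic solver standing in for the C++ OSQP call; it illustrates the shape of the QP, not the controller's actual dynamics.

import numpy as np
from scipy.optimize import minimize  # stand-in for the OSQP solve in C++

horizon, n_forces = 3, 4
A = np.eye(n_forces)           # placeholder force-to-state map
x_desired = np.ones(n_forces)  # desired state at each step

def cost(f_flat):
  f = f_flat.reshape(horizon, n_forces)
  # Quadratic tracking cost plus a small force regularizer.
  return sum(np.sum((A @ fk - x_desired) ** 2) + 1e-3 * np.sum(fk ** 2)
             for fk in f)

bounds = [(0.0, 200.0)] * (horizon * n_forces)  # unilateral contact forces
result = minimize(cost, np.zeros(horizon * n_forces), bounds=bounds)
forces = result.x.reshape(horizon, n_forces)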

Gait Change Environment

The hierarchical gait change environment is defined in src/intermediate_envs.
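
Conceptually, the hierarchy runs the learned policy at a low rate to pick gait parameters, while the convex MPC controller converts them into motor commands at a higher rate. A hedged sketch of that data flow (all names here are illustrative, not the environment's actual API):

def hierarchical_step(env, policy, observation, steps_per_action=50):
  # The high-level policy chooses gait parameters (e.g. stepping frequency).
  gait_params = policy(observation)
  total_reward, done = 0.0, False
  for _ in range(steps_per_action):
    # The low-level MPC controller runs at a higher frequency.
    observation, reward, done, info = env.low_level_step(gait_params)
    total_reward += reward
    if done:
      break
  return observation, total_reward, done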

CMA-ES Agent

The code to train the policy using CMA-ES is in src/agents.
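
As a rough illustration, the training loop follows the standard ask/tell pattern of the cma package; the objective below is a placeholder for a negated episode return from the gait-change environment, since CMA-ES minimizes.

import cma
import numpy as np

def negative_episode_return(policy_params):
  # Placeholder objective; a real evaluation would roll the policy out in
  # src/intermediate_envs and return the negated episode reward.
  return float(np.sum(np.square(policy_params)))

es = cma.CMAEvolutionStrategy(np.ones(8), 0.5)
while not es.stop():
  candidates = es.ask()  # sample policy parameters
  es.tell(candidates, [negative_episode_return(c) for c in candidates])
es.result_pretty()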

Credits

We thank the authors of the following repositories for their contributions to our codebase:

  • After many iterations, the simulation infrastructure is now based on Limbsim. We thank Rosario Scalise for his contributions.

  • The original convex MPC implementation is derived from motion_imitation with modifications.

  • The underlying simulator is PyBullet.
