
Madrona


A GPU-Accelerated Game Engine for Batch Simulation

Madrona is a prototype game engine for creating high-throughput, GPU-accelerated simulators that run thousands of virtual environment instances, and generate millions of aggregate simulation steps per second, on a single GPU. (We like to refer to this as "batch simulation".) This efficiency is useful for high-performance AI agent training (e.g., via reinforcement learning), or for any task that requires a high-performance environment simulator tightly integrated "in-the-loop" of a broader application.

Please see the Madrona engine project page for more information, as well as the Madrona FAQ.

Features:

  • Fully GPU-driven batch ECS implementation for high-throughput execution.
  • CPU backend for debugging and visualization. Simulators can execute on GPU or CPU with no code changes.
  • Export ECS simulation state as PyTorch tensors for efficient interoperability with learning code.
  • (Optional) XPBD rigid body physics for basic 3D collision and contact support.
  • (Optional) Simple 3D renderer for visualizing agent behaviors and debugging.

Disclaimer: The Madrona engine is a research code base. We hope to attract interested users / collaborators with this release; however, there will be missing features / documentation / bugs, as well as breaking API changes as we continue to develop the engine. Please post any issues you find on this GitHub repo.

Technical Paper

For more background and technical details on Madrona's design, please read our SIGGRAPH 2023 paper:

An Extensible, Data-Oriented Architecture for High-Performance, Many-World Simulation. Shacklett et al. 2023

Madrona uses an Entity Component System (ECS) architecture for defining game state and expressing game logic. For general background and tutorials on ECS programming abstractions and the motivation for the ECS design pattern's use in games, we recommend Sander Mertens' excellent ECS FAQ.
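As a rough illustration of what this pattern looks like in practice, here is a minimal sketch of Madrona-style component and archetype declarations. The names Position, Velocity, and Agent are hypothetical and the header path is approximate; this is not code from the engine or its examples.

#include <madrona/ecs.hpp>

// Plain-data components: each struct holds one aspect of an entity's state.
struct Position {
    float x, y, z;
};

struct Velocity {
    float x, y, z;
};

// An archetype declares, up front, the fixed set of components its entities carry.
struct Agent : public madrona::Archetype<Position, Velocity> {};

Game logic is then written as systems (plain functions) that operate on these components, and the engine runs them in parallel across all entities and worlds (see Code Organization below).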

Example Madrona-Based Simulators

Madrona itself is not an RL environment simulator. It is a game engine / framework that makes it easier for developers (like RL researchers) to implement their own new environment simulators that achieve high performance by running batch simulations on GPUs, and tightly integrating those simulation outputs with learning code. Here are a few environment simulators written in Madrona.

  • A simple 3D environment that demonstrates the use of Madrona's ECS APIs, as well as physics and rendering functionality, via a simple task where agents must learn to press buttons and push blocks to advance through a series of rooms.
  • A high-throughput Madrona rewrite of the Overcooked-AI environment, a multi-agent learning environment based on the cooperative video game Overcooked. Check out this repo for a Colab notebook that allows you to train Overcooked agents that demonstrate optimal play in about two minutes.
  • Cartpole: the canonical RL training environment.

Dependencies

Supported Platforms

  • Linux, Ubuntu 18.04 or newer
    • Other distros with equivalent or newer kernel / GLIBC versions will also work
  • macOS 13.x Ventura (or newer)
    • Requires full Xcode 14 install (not just Xcode Command Line Tools)
    • Currently no testing / support for Intel Macs
  • Windows 11
    • Requires Visual Studio 16.4 (or newer) with recent Windows SDK

General Dependencies

  • CMake 3.24 (or newer)
  • Python 3.9 (or newer)

GPU-Backend Dependencies

  • Volta or newer NVIDIA GPU
  • CUDA 12.1 or newer (+ appropriate NVIDIA drivers)
  • Linux (CUDA on Windows lacks certain unified memory features that Madrona requires)

These dependencies are needed for the GPU backend. If they are not present, Madrona's GPU backend will be disabled, but you can still use the CPU backend.

Getting Started

Madrona is intended to be integrated as a library / submodule of simulators built on top of the engine. Therefore, you should start with one of our example simulators, rather than trying to build this repo directly.

As a starting point for learning how to use the engine, we recommend the Madrona Escape Room project. This is a simple 3D environment that demonstrates the use of Madrona's ECS APIs, as well as physics and rendering functionality, via a simple task where agents must learn to press buttons and push blocks to advance through a series of rooms.

For ML-focused users interested in training agents at high speed, we recommend you check out the Madrona RL Environments repo that contains an Overcooked-AI implementation where you can train agents in two minutes using Google Colab, as well as Hanabi and Cartpole implementations.

If you are interested in authoring a new simulator on top of Madrona, we recommend forking one of the above projects and adding your own functionality, or forking the Madrona Simple Example repository for a code base with very little existing logic to get in your way. Basing your work on one of these repositories will ensure the CMake build system and Python bindings are set up correctly.

Building:

Instructions on building and testing the Madrona Escape Room simulator are included below for Linux and MacOS:

git clone --recursive https://github.com/shacklettbp/madrona_escape_room.git
cd madrona_escape_room
pip install -e . 
mkdir build
cd build
cmake ..
make -j  # optionally pass the number of cores to build with, e.g. make -j8

You can then view the environment by running:

./build/viewer

Please refer to the Madrona Escape Room simulator's GitHub page for further context / instructions on how to train agents.

Windows Instructions: Windows users should clone the repository as above, then open the root of the cloned repo in Visual Studio and build with the integrated CMake support. By default, Visual Studio uses a build directory like out/build/Release-x64, depending on your build configuration. This requires changing the pip install command above to tell Python where the C++ Python extensions are located:

pip install -e . -Cpackages.madrona_escape_room.ext-out-dir=out/build/Release-x64

Code Organization

We recommend starting with the Madrona Escape Room project for learning how to use Madrona's ECS APIs, as documentation within Madrona itself is still fairly minimal.

Nevertheless, the following files provide good starting points to dive into the Madrona codebase:

The Context class: include/madrona/context.hpp includes the core ECS API entry points for the engine (creating entities, getting components, etc). Note that the linked file is the header for the CPU backend. The GPU implementation of the same interface lives in src/mw/device/include/madrona/context.hpp. Although many of the headers in include/madrona are shared between the CPU and GPU backends, the GPU backend prioritizes files in src/mw/device/include in order to use GPU-specific implementations. This distinction should not be relevant for most users of the engine, as the public interfaces of both backends match.
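As a hedged sketch of the kinds of calls this interface provides, the snippet below creates and initializes an entity from inside a hypothetical helper function. Agent, Position, and Velocity are the illustrative types from the earlier sketch, and exact method signatures may differ between engine versions.

#include <madrona/context.hpp>

void spawnAgent(madrona::Context &ctx)
{
    // Create a new entity belonging to the Agent archetype.
    madrona::Entity e = ctx.makeEntity<Agent>();

    // Look up the entity's components and initialize them.
    ctx.get<Position>(e) = Position { 0.f, 0.f, 0.f };
    ctx.get<Velocity>(e) = Velocity { 1.f, 0.f, 0.f };

    // Entities are later removed through the same interface, e.g.:
    // ctx.destroyEntity(e);
}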

The ECSRegistry class: include/madrona/registry.hpp is where user code registers all the ECS Components and Archetypes that will be used during the simulation. Note that Madrona requires all used archetypes to be declared up front -- unlike other ECS engines, adding and removing components dynamically from entities is not currently supported.
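A minimal sketch of this up-front registration, using the hypothetical types from the earlier examples (exact ECSRegistry method signatures may vary between engine versions):

#include <madrona/registry.hpp>

static void registerTypes(madrona::ECSRegistry &registry)
{
    // Every component and archetype the simulation uses must be
    // registered before the first step.
    registry.registerComponent<Position>();
    registry.registerComponent<Velocity>();
    registry.registerArchetype<Agent>();

    // Optionally expose a component column to learning code
    // (e.g., as a PyTorch tensor), keyed by an integer export slot.
    registry.exportColumn<Agent, Position>(0);
}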

The TaskGraphBuilder class: include/madrona/taskgraph_builder.hpp includes the interface for building the task graph that will be executed to step the simulation across all worlds.
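A minimal sketch of task graph construction, assuming a user-defined context type named Engine and the hypothetical components from the earlier examples (template argument order and namespaces are approximate):

#include <madrona/taskgraph_builder.hpp>

// A per-entity system: invoked once per matching entity, in every world.
inline void moveSystem(Engine &ctx, Position &pos, const Velocity &vel)
{
    pos.x += vel.x;
    pos.y += vel.y;
    pos.z += vel.z;
}

void setupTasks(madrona::TaskGraphBuilder &builder)
{
    // Schedule moveSystem over all entities that have Position and Velocity.
    // The braces hold the node's dependency list; later nodes can pass the
    // returned node ID there to enforce ordering between systems.
    auto move = builder.addToGraph<madrona::ParallelForNode<Engine,
        moveSystem, Position, Velocity>>({});
    (void)move;
}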

The MWCudaExecutor class: include/madrona/mw_gpu.hpp is the entry point for the GPU backend.

The TaskGraphExecutor class: include/madrona/mw_cpu.hpp is the entry point for the CPU backend.

Citation

If you use Madrona in a research project, please cite our SIGGRAPH 2023 paper:

@article{shacklett23madrona,
    title   = {An Extensible, Data-Oriented Architecture for High-Performance, Many-World Simulation},
    author  = {Brennan Shacklett and Luc Guy Rosenzweig and Zhiqiang Xie and Bidipta Sarkar and Andrew Szot and Erik Wijmans and Vladlen Koltun and Dhruv Batra and Kayvon Fatahalian},
    journal = {ACM Trans. Graph.},
    volume  = {42},
    number  = {4},
    year    = {2023}
}

madrona's People

Contributors

kayvonf, llguy, peter-hd, saran-t, shacklettbp, stafah, warrenxiagg, xiezhq-hermann, zandermajercik


madrona's Issues

roadmap?

This is a very interesting project. Since 2021, there has been a series of work on GPU simulation and GPU batch rendering, but this project shows the possibility of a one-stop solution. Is there a roadmap for this project? I noticed that there is another project trying to integrate MJX. Will this project use JAX as the scripting language and MJX as the default physics engine in the future?

question

Our company is interested in your project.
Do you have a mature release version that is ready for commercial use?

Conditional systems - Only trigger system if another system returns True

Hello! This is an amazing library and I am excited to use it in my project. One question I have - does Madrona support conditionally running a system depending on a trigger (like only run SystemB when SystemA returns True)?

My project essentially involves agents performing N actions moving items on a board before a simulation is finally run to get the board result for the final reward. In other ECS libraries such as Unity DOTS or Bevy, I set up the ECS systems so that the agent-action systems would run when a trigger system determined that BoardPhase.isPlacing = true, and the final-reward simulation systems would run otherwise. I was hoping to replicate something similar in Madrona, but cannot find any conditional system trigger logic in the source code.

Does this exist? If not, what code files would be a good starting point for me to add this functionality?

Intel Mac build broken

I'm getting the following error when trying to configure CMake on my Intel Mac:

(base) samk@Samans-MBP build % cmake -DCMAKE_OSX_ARCHITECTURES=x86_64 -DCMAKE_BUILD_TYPE=Debug ../
-- Populating madronabundledtoolchain
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/samk/src/gpudrive/build/_deps/madronabundledtoolchain-subbuild
[ 11%] No configure step for 'madronabundledtoolchain-populate'
[ 22%] No build step for 'madronabundledtoolchain-populate'
[ 33%] No install step for 'madronabundledtoolchain-populate'
[ 44%] No test step for 'madronabundledtoolchain-populate'
[ 55%] Completed 'madronabundledtoolchain-populate'
[100%] Built target madronabundledtoolchain-populate
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /Users/samk/src/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/Toolchains/LLVM17.0.5.xctoolchain/usr/bin/clang
-- Check for working C compiler: /Users/samk/src/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/Toolchains/LLVM17.0.5.xctoolchain/usr/bin/clang - broken
CMake Error at /usr/local/Cellar/cmake/3.25.1/share/cmake/Modules/CMakeTestCCompiler.cmake:70 (message):
  The C compiler

    "/Users/samk/src/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/Toolchains/LLVM17.0.5.xctoolchain/usr/bin/clang"

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /Users/samk/src/gpudrive/build/CMakeFiles/CMakeScratch/TryCompile-diUooz
    
    Run Build Command(s):/usr/bin/make -f Makefile cmTC_79e58/fast && /Applications/Xcode.app/Contents/Developer/usr/bin/make  -f CMakeFiles/cmTC_79e58.dir/build.make CMakeFiles/cmTC_79e58.dir/build
    Building C object CMakeFiles/cmTC_79e58.dir/testCCompiler.c.o
    /Users/samk/src/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/Toolchains/LLVM17.0.5.xctoolchain/usr/bin/clang   -arch x86_64 -o CMakeFiles/cmTC_79e58.dir/testCCompiler.c.o -c /Users/samk/src/gpudrive/build/CMakeFiles/CMakeScratch/TryCompile-diUooz/testCCompiler.c
    make[1]: /Users/samk/src/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/Toolchains/LLVM17.0.5.xctoolchain/usr/bin/clang: Bad CPU type in executable
    make[1]: *** [CMakeFiles/cmTC_79e58.dir/testCCompiler.c.o] Error 1
    make: *** [cmTC_79e58/fast] Error 2
    
    

  

  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:6 (project)


-- Configuring incomplete, errors occurred!
See also "/Users/samk/src/gpudrive/build/CMakeFiles/CMakeOutput.log".
See also "/Users/samk/src/gpudrive/build/CMakeFiles/CMakeError.log".

This is not blocking me but it would be nice if I could develop locally. The regression must have been introduced between 2669441 (August 5, 2023) and 5911a3d (March 15, 2024).

How to modify Madrona to support addressing read-write conflicts?

Hello,
I'm aiming to implement some systems that might run into read-write conflicts due to many-to-one situations. How can I adapt Madrona to support resolving these read-write conflicts? Where should I begin, and which parts of the code should I concentrate on? (I've attempted to read the code, but most of it lacks comments, which makes it quite strenuous to follow.)
Thanks!

Multi-GPU support?

First of all, I want to say great work!

I was just wondering whether Madrona supports a host PC with multiple GPU devices. To the best of my understanding, this feature is not supported; from a cursory glance through the codebase, cudaSetDevice() is used to set the current GPU device, but I haven't figured out where, when, and how many times it's being called. If it's being called just once, then it implies only one GPU is being used.

Please correct my understanding! Thanks!
