This project is a fork of danijar/dreamerv3.

Home Page: https://danijar.com/dreamerv3

License: MIT License

Mastering Diverse Domains through World Models

A reimplementation of DreamerV3, a scalable and general reinforcement learning algorithm that masters a wide range of applications with fixed hyperparameters.

DreamerV3 Tasks

If you find this code useful, please reference it in your paper:

@article{hafner2023dreamerv3,
  title={Mastering Diverse Domains through World Models},
  author={Hafner, Danijar and Pasukonis, Jurgis and Ba, Jimmy and Lillicrap, Timothy},
  journal={arXiv preprint arXiv:2301.04104},
  year={2023}
}

To learn more, see the project website at https://danijar.com/dreamerv3 and the paper cited above.

DreamerV3

DreamerV3 learns a world model from experience and uses it to train an actor-critic policy from imagined trajectories. The world model encodes sensory inputs into categorical representations and predicts future representations and rewards given actions.

DreamerV3 Method Diagram
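
To make the idea concrete, here is a toy sketch of training in imagination: rolling a world model forward under the current policy and accumulating a discounted return, entirely without touching the real environment. This is purely illustrative and not code from this repository; the world model and policy below are stand-in functions.

# Toy illustration only (not this repository's code): imagined rollouts.
import numpy as np

rng = np.random.default_rng(0)

def world_model_step(latent, action):
    # Stand-in for the learned dynamics and reward heads: predict the next
    # latent state and the reward for taking `action` in state `latent`.
    next_latent = np.tanh(latent + 0.1 * action + 0.01 * rng.normal(size=latent.shape))
    reward = float(np.sum(next_latent * action))
    return next_latent, reward

def policy(latent):
    # Stand-in for the actor network.
    return np.tanh(latent)

def imagined_return(start_latent, horizon=15, discount=0.99):
    # Roll out inside the world model; an actor-critic would be updated
    # from returns computed over such imagined trajectories.
    latent, ret, scale = start_latent, 0.0, 1.0
    for _ in range(horizon):
        action = policy(latent)
        latent, reward = world_model_step(latent, action)
        ret += scale * reward
        scale *= discount
    return ret

print(imagined_return(np.zeros(4)))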

DreamerV3 masters a wide range of domains with a fixed set of hyperparameters, outperforming specialized methods. Removing the need for tuning reduces the amount of expert knowledge and computational resources needed to apply reinforcement learning.

DreamerV3 Benchmark Scores

Due to its robustness, DreamerV3 shows favorable scaling properties. Notably, using larger models consistently increases not only its final performance but also its data efficiency, and increasing the number of gradient steps improves data efficiency further.

DreamerV3 Scaling Behavior

Instructions

Package

If you just want to run DreamerV3 on a custom environment, you can pip install dreamerv3 and copy example.py from this repository as a starting point.
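
For orientation, a heavily abridged sketch of such a script is shown below. It loosely follows the structure of example.py, but treat every name and configuration key here as an assumption and use the actual example.py from the repository as your starting point.

# Abridged sketch, loosely following example.py; names and keys are
# assumptions -- consult the actual file in this repository.
import gym
import dreamerv3
from dreamerv3 import embodied
from embodied.envs import from_gym

# Load defaults from configs.yaml, pick a model size, then override options.
config = embodied.Config(dreamerv3.configs['defaults'])
config = config.update(dreamerv3.configs['medium'])
config = config.update({'logdir': '~/logdir/run1', 'batch_size': 16})
config = embodied.Flags(config).parse()

# Wrap a Gym-style environment so the agent sees the expected spaces.
env = gym.make('CartPole-v1')              # replace with your own environment
env = from_gym.FromGym(env, obs_key='vector')
env = dreamerv3.wrap_env(env, config)
env = embodied.BatchEnv([env], parallel=False)

step = embodied.Counter()
agent = dreamerv3.Agent(env.obs_space, env.act_space, step, config)
# ... build a replay buffer and logger and start training as in example.py.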

Docker

If you want to make modifications to the code, you can either use the provided Dockerfile, which contains setup instructions, or follow the manual instructions below.
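
A typical workflow looks roughly like the following; the image name and mounted paths are placeholders, and the exact commands are documented in the Dockerfile itself (running with --gpus all requires the NVIDIA container toolkit).

docker build -t dreamerv3 .
docker run -it --rm --gpus all -v ~/logdir:/logdir dreamerv3 \
  python dreamerv3/train.py --run.logdir /logdir/run1 --configs crafter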

Manual

Install JAX and then the other dependencies:

pip install -r requirements.txt
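
The JAX step itself depends on your hardware. On a machine with a recent NVIDIA GPU it is typically something like the following; the exact extra depends on your JAX and CUDA versions, so check the JAX installation guide.

pip install -U "jax[cuda12]"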

Simple training script:

python example.py

Flexible training script:

python dreamerv3/train.py \
  --run.logdir ~/logdir/$(date "+%Y%m%d-%H%M%S") \
  --configs crafter --batch_size 16 --run.train_every 32

Tips

  • All config options are listed in configs.yaml and you can override them from the command line.
  • The debug config block reduces the network size, batch size, duration between logs, and so on for fast debugging (but does not learn a good model).
  • By default, the code tries to run on GPU. You can switch to CPU or TPU using the --jax.platform cpu flag. Note that multi-GPU support is untested.
  • You can run with multiple config blocks that will override defaults in the order they are specified, for example --configs crafter large.
  • By default, metrics are printed to the terminal, appended to a JSON lines file, and written as TensorBoard summaries. Other outputs like WandB can be enabled in the training script.
  • If you get a Too many leaves for PyTreeDef error, it means you're reloading a checkpoint that is not compatible with the current config. This often happens when reusing an old logdir by accident.
  • If you are getting CUDA errors, scroll up because the cause is often just an error that happened earlier, such as out of memory or incompatible JAX and CUDA versions.
  • You can use the small, medium, and large config blocks to reduce memory requirements. The default is xlarge. See the scaling graph above for how this affects performance.
  • Many environments are included, some of which require installing additional packages. See the installation scripts in scripts and the Dockerfile for reference.
  • When running on custom environments, make sure to specify the observation keys the agent should use via encoder.mlp_keys, encoder.cnn_keys, decoder.mlp_keys, and decoder.cnn_keys (see the example after this list).
  • To log metrics from environments without showing them to the agent or storing them in the replay buffer, return them as observation keys with log_ prefix and enable logging via the run.log_keys_... options.
  • To continue a stopped training run, simply run the same command line again and make sure that --run.logdir points to the same directory.
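
For example, to restrict the agent to the image observation of an included task, the key selection mentioned above can be overridden on the command line (the regex values shown are assumptions; check configs.yaml for the defaults and valid patterns):

python dreamerv3/train.py \
  --run.logdir ~/logdir/$(date "+%Y%m%d-%H%M%S") \
  --configs crafter \
  --encoder.mlp_keys '$^' --encoder.cnn_keys 'image' \
  --decoder.mlp_keys '$^' --decoder.cnn_keys 'image'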

Disclaimer

This repository contains a reimplementation of DreamerV3 based on the open source DreamerV2 code base. It is unrelated to Google or DeepMind. The implementation has been tested to reproduce the official results on a range of environments.
