by571 / cql

PyTorch implementation of the Offline Reinforcement Learning algorithm CQL. Includes the versions DQN-CQL and SAC-CQL for discrete and continuous action spaces.

reinforcement-learning-algorithms offline-reinforcement-learning dqn sac pytorch-implementation discrete-sac pytorch machine-learning

cql's Introduction

DQN-Atari-Agents

Modularized training of different DQN Algorithms.

This repository contains several add-ons to the base DQN algorithm. All versions can be trained from one script and include the option to train from raw pixel or RAM data. Multiprocessing was recently added to run several environments in parallel for faster training.

The following DQN versions are included:

  • DDQN
  • Dueling DDQN

Both can be enhanced with noisy layers, PER (Prioritized Experience Replay), and multi-step targets, and can be trained in a categorical version (C51). Combining all of these add-ons yields the state-of-the-art value-based algorithm known as Rainbow.
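For illustration, here is a minimal sketch of how a multi-step (n-step) TD target can be computed; the shapes and names below are hypothetical and not taken from this repository:

import torch

gamma, n_step = 0.99, 3

# Hypothetical mini-batch: rewards collected over the n-step window, a done
# flag for the window, and the bootstrapped target Q-value of the state
# reached after n steps.
rewards = torch.randn(32, n_step)   # r_t, r_{t+1}, ..., r_{t+n-1}
dones = torch.zeros(32, 1)          # 1 if the episode terminated inside the window
q_next = torch.randn(32, 1)         # max_a Q_target(s_{t+n}, a)

# Discounted sum of the intermediate rewards plus the bootstrapped value.
discounts = gamma ** torch.arange(n_step, dtype=torch.float32)
n_step_return = (rewards * discounts).sum(dim=1, keepdim=True)
target = n_step_return + (gamma ** n_step) * q_next * (1 - dones)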

Planned Add-ons:

  • Parallel Environments for faster training (wall clock time) [X]
  • Munchausen RL [ ]
  • DRQN (recurrent DQN) [ ]
  • Soft-DQN [ ]
  • Curiosity Exploration [X] currently only for DQN

Train your Agent:

Dependencies

Trained and tested on:

Python 3.6 
PyTorch 1.4.0  
Numpy 1.15.2 
gym 0.10.11 

To train the base DDQN, simply run python run_atari_dqn.py. To train and modify your own Atari agent, the following inputs are optional (a combined example command is shown after the list):

example: python run_atari_dqn.py -env BreakoutNoFrameskip-v4 -agent dueling -u 1 -eps_frames 100000 -seed 42 -info Breakout_run1

  • agent: Specify which type of DQN agent you want to train; default is dqn (the baseline). The following agent inputs are currently possible: dqn, dqn+per, noisy_dqn, noisy_dqn+per, dueling, dueling+per, noisy_dueling, noisy_dueling+per, c51, c51+per, noisy_c51, noisy_c51+per, duelingc51, duelingc51+per, noisy_duelingc51, noisy_duelingc51+per, rainbow
  • env: Name of the Atari environment, default = PongNoFrameskip-v4
  • frames: Number of frames to train, default = 5 million
  • seed: Random seed to reproduce training runs, default = 1
  • bs: Batch size for updating the DQN, default = 32
  • layer_size: Size of the hidden layer, default=512
  • n_step: Number of steps for the multi-step DQN targets
  • eval_every: Evaluate every x frames, default = 50000
  • eval_runs: Number of evaluation runs, default = 5
  • m: Replay memory size, default = 1e5
  • lr: Learning rate, default = 0.00025
  • g: Discount factor gamma, default = 0.99
  • t: Soft update parameter tau, default = 1e-3
  • eps_frames: Linear annealed frames for Epsilon, default = 150000
  • min_eps: Epsilon-greedy annealing crossing point; epsilon anneals quickly until this point and then slowly toward 0 until the last frame, default = 0.1
  • ic, --intrinsic_curiosity: Adds intrinsic curiosity to the extrinsic reward. 0 - reward only, no curiosity; 1 - reward and curiosity; 2 - curiosity only; default = 0
  • info: Name of the training run.
  • fill_buffer: Adds samples to the replay buffer based on a random policy before agent-environment interaction. Input the number of frames pre-added to the buffer, default = 50000
  • save_model: Specify whether the trained network shall be saved [1] or not [0], default = 1 (saved)
  • w, --worker: Number of parallel environments
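For example, a combined run might look like the following (the values are illustrative, and the flag names are assumed to match the option names listed above):

python run_atari_dqn.py -env PongNoFrameskip-v4 -agent rainbow -frames 300000 -eps_frames 100000 -min_eps 0.01 -w 4 -seed 1 -info Pong_rainbow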

Training progress can be viewed with TensorBoard

Just run tensorboard --logdir=runs/

Atari Games Performance:

Pong:

Hyperparameters:

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 300000
  • lr: 1e-4
  • m: 10000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 100000
  • min_eps: 0.01
  • fill_buffer: 10000

[Plot: Pong training performance]

Convergence proof for the CartPole environment

Since training the algorithms on Atari takes a lot of time, I added a quick convergence proof for the CartPole-v0 environment. You can clearly see that Rainbow outperforms the other two methods, Dueling DQN and DDQN.

[Plot: Rainbow convergence on CartPole-v0]

To reproduce the results, the following hyperparameters were used:

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 30000
  • lr: 1e-3
  • m: 500000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 1000
  • min_eps: 0.1
  • fill_buffer: 50000

It is interesting to see that the add-ons have a negative impact on the very simple CartPole environment. Still, the Dueling DDQN version clearly performs better than the standard DDQN version.

[Plot: DQN convergence on CartPole-v0]

[Plot: Dueling DQN convergence on CartPole-v0]

Parallel Environments

To reduce wall-clock time during training, parallel environments are implemented. The following diagrams show the speed improvement for the two environments CartPole-v0 and LunarLander-v2, tested with 1, 2, 4, 6, 8, 10, and 16 workers. Each number of workers was tested over 3 seeds. A minimal sketch of the parallel-stepping idea is shown below.

Convergence behavior for each number of workers can be found here: CartPole-v0 and LunarLander
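The sketch below illustrates the general idea of stepping several environments in separate processes. It is a simplified example assuming the classic gym API (env.step returning four values), not the repository's actual worker implementation:

import multiprocessing as mp
import gym

def worker(remote, env_name, seed):
    # Each process owns one environment and steps it on request.
    env = gym.make(env_name)
    env.seed(seed)
    env.reset()
    while True:
        cmd, action = remote.recv()
        if cmd == "step":
            next_state, reward, done, _ = env.step(action)
            if done:
                next_state = env.reset()
            remote.send((next_state, reward, done))
        elif cmd == "close":
            break

if __name__ == "__main__":
    n_workers = 4
    remotes, work_remotes = zip(*[mp.Pipe() for _ in range(n_workers)])
    procs = [mp.Process(target=worker, args=(wr, "CartPole-v0", seed), daemon=True)
             for seed, wr in enumerate(work_remotes)]
    for p in procs:
        p.start()
    # Step all environments once with action 0 to show the round trip.
    for r in remotes:
        r.send(("step", 0))
    print([r.recv() for r in remotes])
    for r in remotes:
        r.send(("close", None))
    for p in procs:
        p.join()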

Help and issues:

I am open to feedback, bug reports, improvements, or anything else. Just leave me a message or contact me.

Paper references:

Author

  • Sebastian Dittert

Feel free to use this code for your own projects or research. For citation:

@misc{DQN-Atari-Agents,
  author = {Dittert, Sebastian},
  title = {DQN-Atari-Agents:   Modularized PyTorch implementation of several DQN Agents, i.a. DDQN, Dueling DQN, Noisy DQN, C51, Rainbow and DRQN},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/BY571/DQN-Atari-Agents}},
}


cql's Issues

CQL Loss

Hi,
In the DQN version there are two losses, one of which is commented out.
I guess the commented-out one is what is used in the reference implementation, but I am not sure.
Could you please explain this a little bit?

Offline CQL performance on Hopper expert env

Hi, thank you for sharing the PyTorch version of the CQL code :)
I found that when training on the Hopper-expert-v2 environment, CQL-SAC performs poorly.
I don't know why that happens.

Following are the relevant plots (images omitted here).

The training step of CQL-SAC.

I am studying by referring to your CQL code.
However, I think line 68 is not appropriate for offline RL when I run the train.py of CQL-SAC.
Line 68: buffer.add(state, action, reward, next_state, done)

Doesn't this line make it an off-policy (online) setup, since data from the agent's interaction with the environment is put into the buffer?

Thank you for your hard work.
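For illustration, here is a minimal sketch of a purely offline setup, in which the buffer is filled once from a fixed dataset and buffer.add()/env.step() are never called during training. The dataset arrays and buffer below are hypothetical stand-ins, not the repository's actual code:

import numpy as np
from collections import deque

buffer = deque(maxlen=100_000)          # stand-in for a replay buffer

n, obs_dim, act_dim = 1000, 11, 3       # fake dataset, just for the example
dataset = {
    "observations":      np.random.randn(n, obs_dim).astype(np.float32),
    "actions":           np.random.randn(n, act_dim).astype(np.float32),
    "rewards":           np.random.randn(n).astype(np.float32),
    "next_observations": np.random.randn(n, obs_dim).astype(np.float32),
    "terminals":         np.zeros(n, dtype=np.float32),
}

# Fill the buffer once before training; in a purely offline run the agent
# never interacts with the environment during the training loop.
for i in range(n):
    buffer.append((dataset["observations"][i], dataset["actions"][i],
                   dataset["rewards"][i], dataset["next_observations"][i],
                   dataset["terminals"][i]))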

Some question with CQL

First, thanks for your implementation of so many CQL variants. Some of the questions below are related to your implementation, and some are related to CQL itself.

  1. Why is the return value of the function _compute_policy_values in CQL-SAC qs1 - log_pis.detach(), qs2 - log_pis.detach(), with detached log_pis? I think it should not be detached.
  2. What is the meaning of self.temp and self.cql_weight in CQL-SAC? I think self.cql_weight is redundant, since cql_alpha has a similar meaning.
  3. Is it essential to use two Q functions in CQL?
  4. In CQL-SAC-Discrete, I think the q1 inside cql1_scaled_loss = torch.logsumexp(q1, dim=1).mean() - q1.mean() should be an expectation over all possible q(s,a), not just the best one; am I wrong?
  5. In CQL-SAC, why is retain_graph=True used for the Lagrange and critic optimizers?
  6. The most important question: according to p. 29 of the paper, for continuous actions, Q-values from both uniformly sampled actions and policy actions are used to compute the logsumexp objective, but why are actions from pi also used here? I asked also here, but am still at a loss. (A sketch of this term follows below.)

And I know that some of these CQL questions should be asked in the original repo, but its author is no longer active.
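Regarding question 6, the following is an illustrative sketch of the importance-sampled logsumexp term as it is commonly written for continuous actions (shapes, names, and sample counts here are hypothetical): Q-values are evaluated both on actions sampled uniformly from [-1, 1] and on actions sampled from the current policy, and each sample is corrected by its log-density before the logsumexp.

import torch

batch_size, n_samples, action_dim = 32, 10, 3

q_rand = torch.randn(batch_size, n_samples, 1)   # Q(s, a) for a ~ Uniform(-1, 1)
q_pi   = torch.randn(batch_size, n_samples, 1)   # Q(s, a) for a ~ pi(.|s)
log_pi = torch.randn(batch_size, n_samples, 1)   # log pi(a|s) for the policy samples

# Log-density of the uniform distribution on [-1, 1]^action_dim.
log_uniform_density = torch.log(torch.tensor(0.5 ** action_dim))

# Concatenate both sets of importance-corrected Q-values, then logsumexp
# over the sample dimension and average over the batch.
cat_q = torch.cat([q_rand - log_uniform_density, q_pi - log_pi], dim=1)
cql_logsumexp = torch.logsumexp(cat_q, dim=1).mean()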

Potential typo in the CQL implementation

Hi, I noticed a potential typo in your implementation of CQL for the Atari games. In the file "/CQL/CQL-DQN/agent.py", in line 52, you subtract Q_a_s.mean() from the first term; however, from the formulation in the original paper and the original TensorFlow implementation (https://github.com/aviralkumar2907/CQL/blob/master/atari/batch_rl/multi_head/quantile_agent.py), this term needs to be weighted based on the actual actions in the mini-batch. Since you already calculate Q_expected, you just need to replace Q_a_s with Q_expected. So line 52 becomes:

cql_loss = torch.logsumexp(Q_a_s, dim=1).mean() - Q_expected.mean()

Please let me know if I'm wrong, but I did double check this with the source code and the formulations in the paper.
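As an illustration of the suggested fix, here is a minimal sketch with hypothetical shapes (this reflects the issue's proposal, not necessarily the repository's final code):

import torch

batch_size, n_actions = 32, 6
Q_a_s = torch.randn(batch_size, n_actions)              # Q(s, .) for each state
actions = torch.randint(0, n_actions, (batch_size, 1))  # actions from the mini-batch

# Q-values of the actions actually taken in the data (Q_expected).
Q_expected = Q_a_s.gather(1, actions)

# CQL regularizer: push down the log-sum-exp over all actions and
# push up the Q-values of the dataset actions.
cql_loss = torch.logsumexp(Q_a_s, dim=1).mean() - Q_expected.mean()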

Offline CQL Performance on Atari

Thanks a lot for the CQL implementation! I wanted to run the offline CQL-SAC for Atari environments and saw your offline training code is for continuous environments only. Have you run any offline experiments on Atari? If so, could you share what level of performance you achieved when compared to online runs? Thanks a lot!

Training for the discrete LunarLander environment

Hi, thank you for implementing the CQL algorithms. I want to train discrete CQL-SAC on the LunarLander environment. However, the reward does not seem to be converging (see the attached plot, omitted here). What hyperparameters did you use for LunarLander training?
