
rafael1s / deep-reinforcement-learning-algorithms

908 stars · 16 watchers · 218 forks · 110.28 MB

32 projects implementing Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C, and others. Each project is provided with a detailed training log.

Python 1.56% Jupyter Notebook 98.44%
deep-rl-algorithms github-udacity dqn-ppo-ddpg dqn td3 cartpole bipedalwalker deep-reinforcement-learning sac carracing hopperbulletenv lunarlander ddpg ppo a2c antbulletenv soft-actor-critic halfcheetahbulletenv walker2dbulletenv

deep-reinforcement-learning-algorithms's Introduction

Deep Reinforcement Learning Algorithms

Here you can find several projects dedicated to Deep Reinforcement Learning methods.
The projects are organized in matrix form: [env x model], where env is the environment
to be solved and model is the algorithm that solves it. In some cases, the same
environment is solved by several algorithms. Each project is presented as a Jupyter
notebook containing the training log.

The following environments are supported:

AntBulletEnv, BipedalWalker, BipedalWalkerHardcore, CarRacing, CartPole, Crawler, HalfCheetahBulletEnv,
HopperBulletEnv, LunarLander, LunarLanderContinuous, Markov Decision Process 6x6, Minitaur, Minitaur with Duck,
MountainCar, MountainCarContinuous, Pong, Navigation, Reacher, Snake, Tennis, Walker2DBulletEnv.

Four environments (Navigation, Crawler, Reacher, Tennis) are solved in the framework of the
Udacity Deep Reinforcement Learning Nanodegree Program.

  • Monte-Carlo Methods
    In Monte Carlo (MC), we play episodes of the game until we reach the end, collect the rewards
    along the way, and move backward to the start of the episode. We repeat this procedure
    a sufficient number of times and average the value of each state (a minimal sketch follows this list).
  • Temporal Difference Methods and Q-learning
  • Reinforcement Learning in Continuous Space (Deep Q-Network)
  • Function Approximation and Neural Network
    The Universal Approximation Theorem (UAT) states that a feed-forward neural network containing a
    single hidden layer with a finite number of nodes can approximate any continuous function,
    provided rather mild assumptions about the form of the activation function are satisfied
    (a small fitting example follows this list).
  • Policy-Based Methods, Hill-Climbing, Simulated Annealing
    Random-restart hill-climbing is a surprisingly effective algorithm in many cases. Simulated annealing is a good
    probabilistic technique because it does not mistake a local extremum for the global one
    (see the hill-climbing sketch after this list).
  • Policy-Gradient Methods, REINFORCE, PPO
    Define a performance measure J(\theta) to maximize, and learn the policy parameter \theta through
    approximate gradient ascent (see the REINFORCE sketch after this list).
  • Actor-Critic Methods, A3C, A2C, DDPG, TD3, SAC
    The key difference from A2C is the asynchronous part. A3C consists of multiple independent agents (networks) with
    their own weights, each interacting with its own copy of the environment in parallel. Thus, they can explore
    a bigger part of the state-action space in much less time (an A2C loss sketch follows this list).
  • Forward-Looking Actor or FORK
    Model-based reinforcement learning uses the model in a sophisticated way, often based
    on deterministic or stochastic optimal control theory, to optimize the policy based
    on the model. FORK only uses the system network as a black box to forecast future states,
    and does not use it as a mathematical model for optimizing control actions.
    With this key distinction, any model-free Actor-Critic algorithm combined with FORK remains
    model-free (a system-network sketch follows this list).
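
The short sketches below illustrate several of the methods above. They are minimal illustrations, not code taken from the notebooks in this repository, and any helper names they assume (sample_episode, score, and so on) are hypothetical.

First-visit Monte-Carlo value estimation: play whole episodes, walk backward accumulating the discounted return, and average the first-visit returns per state.

```python
from collections import defaultdict

def first_visit_mc(sample_episode, num_episodes=10_000, gamma=1.0):
    """Estimate V(s) by averaging first-visit returns over many episodes.

    `sample_episode` is a hypothetical callable returning one episode as a
    list of (state, reward) pairs.
    """
    returns_sum = defaultdict(float)
    returns_cnt = defaultdict(int)
    for _ in range(num_episodes):
        episode = sample_episode()
        G = 0.0
        for t in reversed(range(len(episode))):            # walk backward from the end
            state, reward = episode[t]
            G = reward + gamma * G                         # return following time t
            if state not in (s for s, _ in episode[:t]):   # count first visits only
                returns_sum[state] += G
                returns_cnt[state] += 1
    return {s: returns_sum[s] / returns_cnt[s] for s in returns_sum}
```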
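
A small, self-contained illustration of the Universal Approximation Theorem in practice, assuming PyTorch is available; the target function is arbitrary and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# One hidden layer with a non-polynomial activation can, in principle,
# approximate any continuous function on a compact set.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)   # inputs on a compact interval
y = x * torch.sin(x)                              # arbitrary continuous target

for _ in range(2000):
    loss = nn.functional.mse_loss(net(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final MSE: {loss.item():.4f}")
```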
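
Hill-climbing with adaptive noise scaling, a sketch of the policy-based idea: treat the return of, say, a linear policy as a black-box score of its weight vector. The `score` callable is hypothetical, not part of this repository.

```python
import numpy as np

def hill_climbing(score, dim, iterations=1000, noise=1e-2, seed=0):
    """Steepest-ascent hill climbing with adaptive noise scaling.

    `score(weights)` is a hypothetical black box returning, e.g., the
    average episode return of a policy with the given weights.
    """
    rng = np.random.default_rng(seed)
    best_w = 1e-4 * rng.standard_normal(dim)
    best_score = score(best_w)
    for _ in range(iterations):
        candidate = best_w + noise * rng.standard_normal(dim)
        candidate_score = score(candidate)
        if candidate_score >= best_score:      # improvement: keep it, shrink the search radius
            best_w, best_score = candidate, candidate_score
            noise = max(noise / 2, 1e-3)
        else:                                  # no improvement: widen the search radius
            noise = min(noise * 2, 2.0)
    return best_w, best_score
```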
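
A REINFORCE surrogate loss, assuming PyTorch: gradient ascent on J(\theta) is implemented by descending the negative of the log-probability-weighted returns. The inputs are hypothetical tensors collected during one episode.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Surrogate loss whose gradient is the REINFORCE estimate of -grad J(theta).

    `log_probs` is a list of log pi(a_t|s_t) tensors for one episode,
    `rewards` the corresponding per-step rewards.
    """
    returns, G = [], 0.0
    for r in reversed(rewards):                # discounted return-to-go G_t
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(returns[::-1])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```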
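
For the synchronous A2C update, a sketch of the two losses computed from a rollout batch, again assuming PyTorch; the tensor names are illustrative, not taken from this repository.

```python
import torch
import torch.nn.functional as F

def a2c_losses(log_probs, values, returns, entropies, entropy_coef=0.01):
    """A2C objectives for one batch of rollout steps.

    `values` are critic estimates V(s_t), `returns` the bootstrapped targets,
    `log_probs`/`entropies` come from the actor; all are 1-D tensors of equal length.
    """
    advantages = returns - values.detach()     # do not backprop the actor loss into the critic
    actor_loss = -(log_probs * advantages).mean() - entropy_coef * entropies.mean()
    critic_loss = F.mse_loss(values, returns)
    return actor_loss, critic_loss
```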
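
Finally, a sketch of the FORK idea: a "system network" trained by plain regression to forecast the next state and used only as a black box, so the underlying Actor-Critic algorithm stays model-free. The architecture and names below are illustrative, not the paper's or the repository's exact code.

```python
import torch
import torch.nn as nn

class SystemNetwork(nn.Module):
    """Black-box forward model s_{t+1} ~ f(s_t, a_t)."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # forecast the next state from the current state and action
        return self.net(torch.cat([state, action], dim=-1))

def system_loss(model, state, action, next_state):
    # ordinary supervised regression on replayed transitions; the forecast is
    # used only to look ahead when evaluating the actor, not for planning
    return nn.functional.mse_loss(model(state, action), next_state)
```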

Projects, models and methods

AntBulletEnv, Soft Actor-Critic (SAC)

BipedalWalker, Twin Delayed DDPG (TD3)

BipedalWalker, PPO, Vectorized Environment

BipedalWalker, Soft Actor-Critic (SAC)

BipedalWalker, A2C, Vectorized Environment

CarRacing with PPO, Learning from Raw Pixels

CartPole, Policy Based Methods, Hill Climbing

CartPole, Policy Gradient Methods, REINFORCE

CartPole, DQN

CartPole, Double DQN

HalfCheetahBulletEnv, Twin Delayed DDPG (TD3)

HopperBulletEnv, Twin Delayed DDPG (TD3)

HopperBulletEnv, Soft Actor-Critic (SAC)

LunarLander-v2, DQN

LunarLanderContinuous-v2, DDPG

Markov Decision Process, Monte-Carlo, Gridworld 6x6

MinitaurBulletEnv, Soft Actor-Critic (SAC)

MinitaurBulletDuckEnv, Soft Actor-Critic (SAC)

MountainCar, Q-learning

MountainCar, DQN

MountainCarContinuous, Twin Delayed DDPG (TD3)

MountainCarContinuous, PPO, Vectorized Environment

Pong, Policy Gradient Methods, PPO

Pong, Policy Gradient Methods, REINFORCE

Snake, DQN, Pygame

Udacity Project 1: Navigation, DQN, ReplayBuffer

Udacity Project 2: Continuous Control-Reacher, DDPG, environment Reacher (Double-Jointed-Arm)

Udacity Project 2: Continuous Control-Crawler, PPO, environment Crawler

Udacity Project 3: Collaboration_Competition-Tennis, Multi-agent DDPG, environment Tennis

Walker2DBulletEnv, Twin Delayed DDPG (TD3)

Walker2DBulletEnv, Soft Actor-Critic (SAC)

Projects with DQN and Double DQN

Projects with PPO

Projects with TD3

Projects with Soft Actor-Critic (SAC)

BipedalWalker, different models

CartPole, different models

For more links

  • on Policy-Gradient Methods, see 1, 2, 3.
  • on REINFORCE, see 1, 2, 3.
  • on PPO, see 1, 2, 3, 4, 5.
  • on DDPG, see 1, 2.
  • on Actor-Critic Methods, and A3C, see 1, 2, 3, 4.
  • on TD3, see 1, 2, 3.
  • on SAC, see 1, 2, 3, 4, 5.
  • on A2C, see 1, 2, 3, 4, 5.

My articles on TowardsDataScience

Videos I have developed within the above projects

deep-reinforcement-learning-algorithms's People

Contributors

rafael1s



deep-reinforcement-learning-algorithms's Issues

How to remove the environment logging in the console?

I get a lot of this kind of logging when a new episode begins: Track generation: 1220..1529 -> 309-tiles track

I noticed that there is no such logging in the console output shown in your .ipynb. Can you tell me how to remove it? Thank you very much.

Why did you remove the death penalty for solving CarRacing with PPO from raw pixels?

Hi, I've been looking through your code as a reference to figure out how to solve CarRacing-v0.


Mine works up to a point and then has a catastrophic performance crash.
The only difference I can find between my version and yours is that when the unwrapped environment is done (fails), the agent gets a big negative reward.
You removed this in your wrapper, and I don't understand why.

What's the significance of offsetting the reward there?

Reward shaping not removed in evaluation in CarRacing-From-Pixels-PPO

Hi,

The figure and log in the README show scores >1000, which, given CarRacing's design, is not really possible.
It turns out that the reward shaping in Wrapper.step() is not removed during evaluation, and that leads to inflated results.
After commenting out the relevant lines, I got an average score of 820 over 100 episodes.
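
A hedged sketch of the fix being described: gate the shaping term on a training flag so that evaluation reports the unmodified CarRacing reward. The wrapper below assumes the classic gym API (4-tuple step) and uses a made-up shaping term; it is not the repository's actual wrapper.

```python
import gym

class ShapedCarRacing(gym.Wrapper):
    """Apply reward shaping during training only (illustrative, not the repo's code)."""

    def __init__(self, env, training=True):
        super().__init__(env)
        self.training = training

    def step(self, action):
        obs, reward, done, info = self.env.step(action)   # classic gym 4-tuple API
        if self.training:
            reward -= 0.1      # hypothetical shaping term, disabled at evaluation
        return obs, reward, done, info

# evaluation: env = ShapedCarRacing(gym.make("CarRacing-v0"), training=False)
```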
