
Awesome Visual RL

This is a collection of research papers on Visual Reinforcement Learning (Visual RL) and other vision-related reinforcement learning research.

If you find any missing papers, feel free to open an issue or email Qi Wang / GuoZheng Ma / Yuan Pu. Contributions in any form that make this list more comprehensive are welcome. 📣📣📣

If you find this repository useful, please consider giving us a star 🌟.

Feel free to share this list with others! 🥳🥳🥳

Papers

format:
- publisher **[abbreviation of proposed model]** title [paper link] [code link]

🔷 Model-Based   🔶 Model-Free
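
For orientation, here is a minimal PyTorch sketch (not taken from any specific paper or linked repository) of the pixel-encoder setup that most model-free (🔶) entries below share: a stack of RGB frames is mapped through a small CNN to a compact latent, often combined with random-shift augmentation in the style of DrQ/DrQ-v2. All shapes, layer sizes, and the `PixelEncoder` / `random_shift` names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelEncoder(nn.Module):
    """CNN encoder: stacked uint8 frames -> low-dimensional latent (illustrative sizes)."""

    def __init__(self, frame_stack: int = 3, latent_dim: int = 50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 * frame_stack, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Infer the flattened feature size from a dummy 84x84 input.
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, 3 * frame_stack, 84, 84)).numel()
        self.fc = nn.Linear(n_flat, latent_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, 3 * frame_stack, 84, 84), uint8 pixels in [0, 255]
        x = obs.float() / 255.0
        x = self.conv(x).flatten(start_dim=1)
        return self.fc(x)


def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """DrQ-style augmentation: replicate-pad the frames, then crop back
    to the original size at a random offset."""
    _, _, h, w = obs.shape
    padded = F.pad(obs.float(), (pad, pad, pad, pad), mode="replicate")
    top = int(torch.randint(0, 2 * pad + 1, (1,)))
    left = int(torch.randint(0, 2 * pad + 1, (1,)))
    return padded[:, :, top:top + h, left:left + w]
```

Model-based (🔷) entries such as the Dreamer and TD-MPC families additionally learn a latent dynamics model on top of an encoder like this and plan or imagine rollouts in that latent space.
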

2024

  • 🔶 ICLR 2024 Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages [Paper] [Torch Code]
  • 🔷 ICLR 2024 [TD-MPC2] TD-MPC2: Scalable, Robust World Models for Continuous Control [Paper] [Torch Code]
  • 🔶 ICLR 2024 [DrM] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization [Paper]
  • 🔶 ICLR 2024 Oral [PTGM] Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning [Paper] [Torch Code]
  • 🔷 ICLR 2024 [DreamSmooth] DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing [Paper]
  • 🔷 ICLR 2024 Oral [R2I] Mastering Memory Tasks with World Models [Paper] [JAX Code]
  • 🔶 ICLR 2024 Spotlight [PULSE] Universal Humanoid Motion Representations for Physics-Based Control [Paper] [Torch Code]
  • 🔷 ICML 2024 Oral [Dynalang] Learning to Model the World With Language [Paper] [JAX Code]
  • 🔷 ICML 2024 [HarmonyDream] HarmonyDream: Task Harmonization Inside World Models [Paper] [JAX Code]
  • 🔶 ICML 2024 Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning [Paper]
  • 🔶 ICML 2024 [BeigeMaps] BeigeMaps: Behavioral Eigenmaps for Reinforcement Learning from Images [Paper]
  • 🔶 RLC 2024 [SADA] A Recipe for Unbounded Data Augmentation in Visual Reinforcement Learning [Paper][Torch Code]
  • 🔷 arXiv 2024.5 [Puppeteer] Hierarchical World Models as Visual Whole-Body Humanoid Controllers [Paper] [Torch Code]

2023

  • 🔶 ICLR 2023 [CoIT] On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning [Paper] [Torch Code]
  • 🔷 ICLR 2023 [MoDem] MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations [Paper] [Torch Code]
  • 🔶 ICLR 2023 [TED] Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning [Paper] [Torch Code]
  • 🔶 ICLR 2023 Spotlight [VIP] VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training [Paper] [Torch Code]
  • 🔷 ICML 2023 Oral Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [Paper]
  • 🔶 ICML 2023 On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline [Paper] [Torch Code]
  • 🔶 ICCV 2023 [CG2A] Improving Generalization in Visual Reinforcement Learning via Conflict-aware Gradient Agreement Augmentation [Paper]
  • 🔶 NeurIPS 2023 [HAVE] Hierarchical Adaptive Value Estimation for Multi-modal Visual Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2023 Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2023 [TACO] TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2023 [CMID] Conditional Mutual Information for Disentangled Representations in Reinforcement Learning [Paper][Torch Code]
  • 🔷 arXiv 2023.1 [DreamerV3] Mastering Diverse Domains through World Models [Paper][JAX Code][Torch Code]
  • 🔷 arXiv 2023.5 [CoWorld] Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning [Paper]

2022

  • 🔶 ICLR 2022 [DrQ-v2] Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning [Paper][Torch Code]
  • 🔶 ICLR 2022 [CLOP] Local Feature Swapping for Generalization in Reinforcement Learning [Paper][Torch Code]
  • 🔷 ICML 2022 [TD-MPC] Temporal Difference Learning for Model Predictive Control [Paper][Torch Code]
  • 🔶 ICML 2022 [DRIBO] DRIBO: Robust Deep Reinforcement Learning via Multi-View Information Bottleneck [Paper][Torch Code]
  • 🔷 ICML 2022 [DreamerPro] DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations [Paper][TF Code]
  • 🔶 IJCAI 2022 [CCLF] CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [Paper]
  • 🔶 IJCAI 2022 [TLDA] Don’t Touch What Matters: Task-Aware Lipschitz Data Augmentation for Visual Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2022 [PIE-G] Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2022 Efficient Scheduling of Data Augmentation for Deep Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2022 Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels? [Paper]
  • 🔶 NeurIPS 2022 [A2LS] Reinforcement Learning with Automated Auxiliary Loss Search [Paper][Torch Code]
  • 🔶 NeurIPS 2022 [MLR] Mask-based Latent Reconstruction for Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2022 [SRM] Spectrum Random Masking for Generalization in Image-based Reinforcement Learning [Paper][Torch Code]
  • 🔷 NeurIPS 2022 Deep Hierarchical Planning from Pixels [Paper][TF Code]
  • 🔷 NeurIPS 2022 Spotlight [Iso-Dream] Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models [Paper][Torch Code]
  • 🔶 TPAMI 2022 [M-CURL] Masked Contrastive Representation Learning for Reinforcement Learning [Paper]
  • 🔷 CoRL 2022 [DayDreamer] DayDreamer: World Models for Physical Robot Learning [Paper] [TF Code]

2021

  • 🔶 ICLR 2021 Spotlight [DrQ] Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels [Paper][Torch Code]
  • 🔶 ICLR 2021 [MixStyle] Domain Generalization with MixStyle [Paper][Torch Code]
  • 🔶 ICLR 2021 [SPR] Data-Efficient Reinforcement Learning with Self-Predictive Representations [Paper][Torch Code]
  • 🔷 ICLR 2021 [DreamerV2] Mastering Atari with Discrete World Models [Paper][TF Code][Torch Code]
  • 🔶 ICML 2021 [SECANT] Self-Expert Cloning for Zero-Shot Generalization of Visual Policies [Paper] [Torch Code]
  • 🔶 NeurIPS 2021 [PlayVirtual] Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning [Paper][Torch Code]
  • 🔶 NeurIPS 2021 [EXPAND] Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation [Paper]
  • 🔶 NeurIPS 2021 [SVEA] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation [Paper] [Torch Code]
  • 🔶 NeurIPS 2021 [UCB-DrAC] Automatic Data Augmentation for Generalization in Reinforcement Learning [Paper] [Torch Code]

2020

  • 🔷 ICML 2020 [Plan2Explore] Planning to Explore via Self-Supervised World Models [Paper][TF Code][Torch Code]
  • 🔶 ICML 2020 [CURL] CURL: Contrastive Unsupervised Representations for Reinforcement Learning [Paper] [Torch Code]
  • 🔷 ICLR 2020 [DreamerV1] Dream to Control: Learning Behaviors by Latent Imagination [Paper][TF Code][Torch Code]

2018

  • 🔷 NeurIPS 2018 Oral World Models [Paper]

Other Vision-Related Reinforcement Learning Papers

2024

  • 🔷 ICLR 2024 Oral Predictive auxiliary objectives in deep RL mimic learning in the brain [Paper]
  • 🔶 ICLR 2024 Oral [METRA] METRA: Scalable Unsupervised RL with Metric-Aware Abstraction [Paper] [Torch Code]
  • 🔶 ICLR 2024 Spotlight Selective Visual Representations Improve Convergence and Generalization for Embodied AI [Paper] [Torch Code]
  • 🔶 ICLR 2024 Spotlight Towards Principled Representation Learning from Videos for Reinforcement Learning [Paper] [Torch Code]

Contributors

  • Qi Wang, Shanghai Jiao Tong University
  • GuoZheng Ma, Tsinghua University
  • Yuan Pu, Shanghai Artificial Intelligence Laboratory (OpenDILab)
