tinkoff-ai / lb-sac
18 stars · 18 watchers · 1 fork · 927 KB

Official implementation of "Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size" (NeurIPS 2022 Offline RL Workshop).

License: Apache License 2.0

Dockerfile 2.92% Python 97.08%
deep-reinforcement-learning ensemble-learning offline-reinforcement-learning pytorch-implementation

lb-sac's People

Contributors

howuhh, vkurenkov

Stargazers

18 stargazers

Watchers

4 watchers

Forkers

wuti0525

lb-sac's Issues

Some questions about the results reported in the LB-SAC paper:

  1. When I ran the walker2d-full-replay-v2 experiment, the final normalized score was around 106.5, while the paper reports 109.1. The gap is small, but it matters for my work: I modified the algorithm and obtained around 108, which is above my own reproduction of LB-SAC but still slightly below the paper's number (similar to my halfcheetah-medium-expert-v2 results, where the reproduction is also slightly worse than the paper). Does my result genuinely improve the algorithm's performance, or should I report my reproduced baseline alongside the paper's numbers? Since LB-SAC is among the strongest model-free offline algorithms, the remaining headroom is small, so even small improvements matter to me.
  2. About how the two reported normalized scores are computed: I take the final normalized score to be the score of the final policy, i.e. the result at the last training step averaged over the 4 random seeds. For the normalized maximum score I see two possible interpretations (see the sketch after this list):
    1. Record the best score on each seed, then average across seeds.
    2. Average across seeds at each evaluation step, then take the maximum over steps.
    Do you average first and then take the maximum, or take the maximum on each seed first and then average?
    Looking forward to your reply!
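
For concreteness, here is a minimal sketch of the two interpretations from point 2, assuming a hypothetical array of evaluation scores indexed by seed and checkpoint (the array shape and all values are illustrative, not taken from the repository):

```python
import numpy as np

# Hypothetical evaluation log: scores[seed, step] is the normalized score
# of a given seed at a given evaluation checkpoint (values are made up).
scores = np.array([
    [80.0, 105.0, 106.0],   # seed 0
    [90.0,  95.0, 107.0],   # seed 1
    [85.0, 110.0, 104.0],   # seed 2
    [88.0, 100.0, 106.5],   # seed 3
])

# Final normalized score: last checkpoint, averaged over seeds.
final = scores[:, -1].mean()               # 105.875

# Interpretation 1: best checkpoint per seed, then average across seeds.
max_then_mean = scores.max(axis=1).mean()  # 107.375

# Interpretation 2: average across seeds per checkpoint, then best checkpoint.
mean_then_max = scores.mean(axis=0).max()  # 105.875

print(final, max_then_mean, mean_then_max)
```

Note that interpretation 1 is always at least as large as interpretation 2, because the per-seed maxima may come from different checkpoints; this is why the order of aggregation matters.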
