Comments (13)
Hmm, I manually reverted this commit because I had some conflicts (and it was only 5 lines changed).
I've just launched the experiment on Space Invaders.
I don't really know how I could share the new branch I just made for the sanity check?
Edited: I just had to upgrade my plotly version from 2.5.1 to 3.1.0 to make it work.
from rainbow.
Here is the change I made to revert the gradient clipping.
The experiment is currently ongoing.
https://github.com/marintoro/Rainbow/commits/master
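For reference, this is roughly the operation the reverted change removes: clipping the global gradient norm before the optimiser step. A minimal NumPy sketch (the helper name is hypothetical, not the repo's code):

```python
import numpy as np

def clip_grad_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm is <= max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads

# In PyTorch the equivalent is torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm),
# called between loss.backward() and optimiser.step().
```

Clipping caps the size of each update, which can stabilise training but also limits how fast large TD errors are corrected, which is presumably why removing it changes behaviour on some games.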
Thanks for checking! I've dug through my original results for 1.0 and I've got the models/plots for Beam Rider, Enduro, Ms. Pac-Man and Seaquest. Performance for Seaquest has also dropped for 1.1, don't have 1.1 results for any other games. Got rid of my suboptimal 1.0 Frostbite results after I got expected results on 1.1. But for some reason I've lost my Space Invaders results, so I'm now confused as to where I managed to get those results. Currently I'm running with pytorch 0.4.1, atari_py 0.1.1 and opencv-python 3.4.2.16, but I've not been tracking library versions. I always just run the same seed and one experiment. Let's leave this issue open until the results can be replicated. I'm currently seeing if removing gradient clipping from 1.1 will get Space Invaders to work, but that'll take a week to check if I can even hang on to my current GPU. If you still have the trained model from this experiment a quick sanity check would be to load it but use epsilon=0.001 for testing.
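The suggested sanity check is just ε-greedy action selection at test time; a minimal sketch (hypothetical helper, not the repo's code):

```python
import random

def act_epsilon_greedy(q_values, epsilon=0.001, rng=random):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With ε=0.001 the agent acts greedily almost always, but the tiny amount of randomness can break the deterministic loops a purely greedy policy sometimes falls into on Atari.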
So I just did the sanity check you suggested, with epsilon=0.001 for testing, and it doesn't really change anything (I didn't get exactly the same result, but almost).
My library versions are:
atari_py: 0.1.1
torch: 0.4.0
opencv-python: 3.4.0.12
I don't expect those minor differences to be a problem, but I may be wrong...
Actually I have a pretty good CPU/GPU available right now and I could run some other tests (but I would really like to get a working version on Space Invaders! It's the sanity check for my own multi-agent version ^^)
I would have been using torch 0.4.0 at the time, and I don't expect opencv-python to change a lot on a minor version (I may have been on the same version as you anyway).
If you have compute to test then taking master and running Space Invaders with priority weights not included in the new priorities would be the next thing to check.
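To make that suggestion concrete, here is one reading of it as a sketch of prioritized replay: importance-sampling weights correct for the non-uniform sampling, and the question is whether they get folded into the new priorities. All names here are hypothetical illustrations, not the repo's API:

```python
import numpy as np

def importance_weights(probs, n, beta=0.4):
    """IS weights w_i = (N * P(i))^-beta, normalised by the max for stability."""
    w = (n * probs) ** -beta
    return w / w.max()

def new_priorities(td_errors, weights, include_weights=False):
    """Updated priorities: |TD error|, optionally scaled by the IS weights.
    The test proposed above is to run with include_weights=False."""
    p = np.abs(td_errors)
    return p * weights if include_weights else p
```

Folding the weights in damps the priority of heavily over-sampled transitions, so the two variants explore the replay buffer quite differently over a long run.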
epsilon=0.001 was needed for Pong to report the right scores, and using log softmax in training prevents numerical problems (had these in Q*bert) so I'm pretty sure those are needed.
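On the log-softmax point: computing log(softmax(x)) naively overflows for large-magnitude logits, while the max-shifted form stays finite. A minimal illustration (not the repo's code):

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax via the max-shift (log-sum-exp) trick."""
    x = x - np.max(x)
    return x - np.log(np.sum(np.exp(x)))

def naive_log_softmax(x):
    """Naive version for comparison: exp() overflows on large logits,
    so the log returns -inf/nan instead of a usable value."""
    e = np.exp(x)
    return np.log(e / e.sum())
```

This is exactly the kind of silent numerical failure that shows up as inexplicable training collapse on some games, as described for Q*bert above.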
I'm not sure I fully understand what you mean there.
I should run the current master on Space Invaders, but removing which part? (Which commits are involved, specifically?)
Reverting d6538df and running Space Invaders would be a good test.
Fork this repo, make a branch with the changes and just point to it in this issue.
I am currently at 14M steps and it looks really similar to everything else... (but yes, 14M is not enough, it's still running). At 14M on your graph, though, we are already supposed to get rewards around 10000, and here it's still barely 3000...
Here are the current rewards and Q-values
I would say wait until about 18M just in case, but it doesn't look promising. Of all the hyperparameters, I might have used --noisy-std=0.5, but I can't think of anything else I changed. I'm looking at what has changed between 1.0 and master that might be the culprit, but I'm really not sure. There's the change to noise sampling at a8d01b8, but the two should be equivalent. There's disabling cuDNN, but if adding that back worked it would be troubling, because it should just be a slightly nondeterministic version of what's happening now.
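For context on --noisy-std and the noise-sampling change: NoisyNets with factorised Gaussian noise builds the weight noise from two small vectors passed through f(x) = sign(x)·sqrt(|x|), with σ initialised around σ₀/√(fan-in), σ₀ being the --noisy-std value. A sketch with hypothetical names, not the repo's implementation:

```python
import numpy as np

def f(x):
    """Factorised-noise transform from the NoisyNets paper: sign(x) * sqrt(|x|)."""
    return np.sign(x) * np.sqrt(np.abs(x))

def sample_weight_noise(in_features, out_features, rng):
    """Factorised Gaussian noise: outer product of two transformed noise vectors,
    giving a (out_features, in_features) noise matrix from O(p + q) samples."""
    eps_in = f(rng.standard_normal(in_features))
    eps_out = f(rng.standard_normal(out_features))
    return np.outer(eps_out, eps_in)
```

Two implementations of this sampling should be statistically equivalent even if they draw different concrete values, which is why the a8d01b8 change alone is an unlikely culprit.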
GREAT NEWS! It looks way more promising at 20M steps!!! It's the first time I've got scores above 3k (and it even sometimes hits more than 20k!!!)
By the way, I really don't think the current version 1.0 works on Space Invaders (c.f. the first post of this issue), or at least it doesn't on my computer with my library versions (but since the revert_grad_clipping version works on my computer with the exact same library versions, I think it's just v1.0 that is not working as expected...).
Now I think you should try the current master to see whether it works on Space Invaders, to check if there really is a problem with the grad clipping.
I must admit that now that I have a version working on Space Invaders, I will go back to my multi-agent version (and maybe everything will work just fine now :p).
Edited: It's pretty funny, it really started to play way better 1M steps after my first screenshot ^^
Re-edited: I stopped the training and updated the reward plot (just 2M steps more...); it still looks good (with one episode hitting 50k!)
Really glad that you waited; I would say that's good enough. Below I have a run from master without gradient clipping, and what I believe is a run from 1.1 stagnating at 3k, which together indicate that what you have now is the correct setting, but I'll need to check that properly.
Closing because an agent trained from master gets crazy high scores.