Comments (3)
@michaelschaarschmidt w.r.t. your comment in gitter:
In general, TRPO has two potential instabilities: the gradient computation on the Fisher-vector product (FVP) and the conjugate gradient. The CG + line search should fail gracefully, though, by not updating when it fails to find an improved solution.
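For reference, the graceful failure described here is the backtracking line search falling back to the old parameters when no fraction of the proposed step improves the surrogate objective enough. A minimal sketch of that logic (illustrative names, not tensorforce's exact implementation):

```python
import numpy as np

def line_search(loss, theta, full_step, expected_improve_rate,
                max_backtracks=10, accept_ratio=0.1):
    # Backtracking line search in the TRPO style (illustrative sketch).
    # Tries geometrically shrinking fractions of the full natural-gradient
    # step; accepts the first one that improves the surrogate loss enough,
    # otherwise returns the old parameters unchanged ("graceful failure").
    loss_old = loss(theta)
    for step_fraction in 0.5 ** np.arange(max_backtracks):
        theta_new = theta + step_fraction * full_step
        actual_improve = loss_old - loss(theta_new)
        expected_improve = expected_improve_rate * step_fraction
        if actual_improve > 0 and actual_improve / expected_improve > accept_ratio:
            return theta_new  # improved solution found
    return theta  # no improvement found: skip the update
```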
I'm not seeing it fail gracefully when this is hit. It seems as though a NaN or other unstable update may be making its way through the graph update, as I see my agent behavior change significantly whenever this is encountered.
/home/tom/src/tensorforce/tensorforce/models/trpo_model.py:161: RuntimeWarning: invalid value encountered in sqrt
lagrange_multiplier = np.sqrt(shs / self.max_kl_divergence)
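For anyone else digging into this: shs is sᵀHs for the conjugate-gradient search direction s, and it is only guaranteed to be non-negative when the Fisher-vector product is numerically positive semi-definite. When it goes negative, the sqrt returns NaN, and (presuming the step is then scaled by this multiplier) the NaN spreads into every parameter the step touches. A toy numpy reproduction:

```python
import numpy as np

# Toy reproduction of the failure mode (not tensorforce code): a negative
# s^T H s turns the sqrt into NaN, which then poisons the whole update.
max_kl_divergence = 0.01
s = np.array([0.3, -0.7])    # CG search direction
Hs = np.array([-0.2, 0.4])   # Fisher-vector product, numerically non-PSD here
shs = s.dot(Hs)              # -0.34, i.e. negative

lagrange_multiplier = np.sqrt(shs / max_kl_divergence)  # RuntimeWarning -> nan
full_step = s / lagrange_multiplier                     # presumed scaling; all-NaN now
print(full_step)             # [nan nan]
```

If a step like that ever reaches the parameter assignment without being caught, the policy weights are silently destroyed, which would be consistent with the agent going inert in the log below.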
Here are some results from my custom environment (doing nothing yields zero reward), which caps episodes at 100 steps. The agent essentially stops acting after encountering this:
Finished episode 2060 after 26 timesteps (reward: 2.13)
Finished episode 2061 after 5 timesteps (reward: 2.02)
Finished episode 2062 after 17 timesteps (reward: 2.09)
Finished episode 2063 after 65 timesteps (reward: 2.53)
Finished episode 2064 after 100 timesteps (reward: -4.03)
Finished episode 2065 after 21 timesteps (reward: 2.08)
Finished episode 2066 after 11 timesteps (reward: 2.03)
Finished episode 2067 after 28 timesteps (reward: 2.08)
Finished episode 2068 after 21 timesteps (reward: 3.2)
Finished episode 2069 after 5 timesteps (reward: 2.03)
Finished episode 2070 after 53 timesteps (reward: 2.11)
Finished episode 2071 after 15 timesteps (reward: 2.02)
Finished episode 2072 after 8 timesteps (reward: 2.02)
Finished episode 2073 after 26 timesteps (reward: 3.1)
Finished episode 2074 after 4 timesteps (reward: 2.03)
Finished episode 2075 after 100 timesteps (reward: -8.07)
Finished episode 2076 after 26 timesteps (reward: 2.25)
Finished episode 2077 after 6 timesteps (reward: 3.05)
Finished episode 2078 after 14 timesteps (reward: 3.07)
Finished episode 2079 after 54 timesteps (reward: 4.01)
Finished episode 2080 after 11 timesteps (reward: 3.04)
Finished episode 2081 after 100 timesteps (reward: -13.16)
Finished episode 2082 after 63 timesteps (reward: 2.37)
Finished episode 2083 after 18 timesteps (reward: 3.05)
Finished episode 2084 after 27 timesteps (reward: 2.02)
Finished episode 2085 after 3 timesteps (reward: 2.02)
/home/tom/src/tensorforce/tensorforce/models/trpo_model.py:161: RuntimeWarning: invalid value encountered in sqrt
lagrange_multiplier = np.sqrt(shs / self.max_kl_divergence)
Finished episode 2086 after 100 timesteps (reward: 4.0)
Finished episode 2087 after 100 timesteps (reward: 0.0)
Finished episode 2088 after 100 timesteps (reward: 0.0)
Finished episode 2089 after 100 timesteps (reward: 0.0)
Finished episode 2090 after 100 timesteps (reward: 0.0)
Finished episode 2091 after 100 timesteps (reward: 0.0)
Finished episode 2092 after 100 timesteps (reward: 0.0)
Finished episode 2093 after 100 timesteps (reward: 0.0)
Finished episode 2094 after 100 timesteps (reward: 0.0)
Finished episode 2095 after 100 timesteps (reward: 0.0)
So the easiest hack that works for now is to check in the code whether shs is smaller than zero and, if it is, skip the batch. It's not a permanent solution, but it makes the algorithm usable.
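Concretely, something along these lines (illustrative helper and names, not tensorforce's actual structure; the real check would sit just before the sqrt around trpo_model.py:161):

```python
import numpy as np

def natural_gradient_step(search_direction, fisher_vector_product,
                          max_kl_divergence):
    # Sketch of the workaround: bail out before the sqrt whenever shs is
    # non-positive or non-finite, so a bad batch is skipped instead of
    # producing a NaN step.
    shs = search_direction.dot(fisher_vector_product(search_direction))
    if not np.isfinite(shs) or shs <= 0.0:
        return None  # caller skips the update for this batch
    lagrange_multiplier = np.sqrt(shs / max_kl_divergence)
    return search_direction / lagrange_multiplier
```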
Another thing I noticed with continuous action spaces is that the standard deviation of the Gaussian (exploration) noise is not parameterized, which seems like a bad default for this kind of on-policy method. It should be an easy fix, since the required code in the Gaussian class is just commented out, but enabling it does not seem possible without low-level adjustments at the moment.
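For reference, parameterizing the noise usually just means making the log standard deviation a trainable variable alongside the mean; a minimal TF1-style sketch (illustrative, not the actual Gaussian class):

```python
import tensorflow as tf

# Minimal sketch of a parameterized Gaussian policy head. The mean comes
# from the network; log_std is a free trainable variable, so the
# exploration noise can shrink or grow during training instead of
# staying fixed.
action_size = 2
features = tf.placeholder(tf.float32, shape=(None, 64))

mean = tf.layers.dense(features, action_size)
log_std = tf.get_variable('log_std', shape=(action_size,),
                          initializer=tf.zeros_initializer())
std = tf.exp(log_std)  # exp keeps the std strictly positive

sampled_action = mean + std * tf.random_normal(tf.shape(mean))
```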
So I have a hard time reliably reproducing this (saw it once in 20 runs on Python 3.6, never on 2.7), which makes it difficult to debug. Skipping the update when shs < 0 now in any case.