Comments (4)
I think this was caused by a bug in my code. I had somehow cut off the last linear layer of the CNN, but everything trained without error (for a while); there were just many zeros at the end of the logit vector. Once I fixed this, ABML trained without error. Sorry for the confusion.
Hi Jeff,
Thank you for letting me know.
For the NaN error, what I can think of is the KL divergence (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/_utils.py#L165): at L166 there is a division by `s1_vec`, so if `s1_vec` is zero, it would cause a NaN. Another point is the loss prior that regularizes the meta-parameter std (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/Abml.py#L77). If `tau = 0`, then the log-likelihood of the Gamma distribution for `tau` would be undefined. I wonder if you can put some `print` statements in to print out the values of the KL divergence and the loss prior, to see which one causes the NaN. You may also want to check the hyper-parameters used for the Gamma prior. In the ABML paper, they only ran on mini-ImageNet, and hence I hard-coded those values (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/Abml.py#L20); they might cause problems when running on Omniglot.
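For reference, a guarded version of such a KL term might look like the minimal sketch below (tensor names are illustrative, not the exact ones in `_utils.py`; the epsilon floor is an assumption, not the repo's code):

```python
import torch

def kl_diag_gaussians(m0, s0, m1, s1, eps=1e-8):
    """KL(N(m0, diag(s0^2)) || N(m1, diag(s1^2))), with a small epsilon
    floor on the stds so s1 == 0 cannot produce a division-by-zero NaN."""
    s0 = s0.clamp(min=eps)
    s1 = s1.clamp(min=eps)
    var_ratio = (s0 / s1) ** 2
    mean_term = ((m0 - m1) / s1) ** 2
    return 0.5 * (var_ratio + mean_term - 1 - var_ratio.log()).sum()

# toy check: a zero entry in s1 no longer yields NaN
m0, s0 = torch.zeros(3), torch.ones(3)
m1 = torch.zeros(3)
s1 = torch.tensor([1.0, 0.5, 0.0])
kl = kl_diag_gaussians(m0, s0, m1, s1)
print("KL finite:", torch.isfinite(kl).item(), "value:", kl.item())
```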
It seemed very random, and every time I put a print statement in, the NaNs would come from somewhere else. I finally narrowed it down to the weight sample: the log std of some of the weights was so high that it caused an `inf` when exponentiated (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/_utils.py#L240). I have seen this happen with other Bayesian models when using `exp` instead of something like `c + (1 - c) * softplus(log_sigma)`. I guess changing the KL might help as well, but I am not sure which way to move it. I clamped the `log_std` parameters to be between (1e-8, 5) and it seems to train stably, but there is probably a better solution.
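A minimal sketch of what I mean (the constant `c`, the name `rho`, and the test values are all illustrative, not from the repo):

```python
import torch
import torch.nn.functional as F

def std_from_rho(rho, c=1e-3):
    """Map an unconstrained parameter rho to a positive std.
    Unlike torch.exp, softplus grows only linearly for large rho,
    so it cannot overflow to inf; c keeps the std away from zero."""
    return c + (1 - c) * F.softplus(rho)

rho = torch.tensor([-30.0, 0.0, 100.0])
print(torch.exp(rho))     # last entry overflows to inf in float32
print(std_from_rho(rho))  # all entries stay finite and >= c
```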
This problem may be caused by using a large `meta_lr` for `log_std`, resulting in overshooting for some values of `log_std`. The current implementation uses the same `meta_lr` for both the `mean` and `log_std`. It is probably a good idea to use two separate learning rates for the two meta-parameters, as in the sketch below.
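With PyTorch parameter groups, that split could look like this minimal sketch (parameter shapes and learning-rate values are illustrative, not taken from the repo):

```python
import torch

# illustrative meta-parameters: a mean and a log-std per weight
mean = torch.zeros(100, requires_grad=True)
log_std = torch.full((100,), -3.0, requires_grad=True)

# one optimizer, two learning rates: a smaller step for log_std
# reduces the risk of overshooting into a huge std (and an inf on exp)
meta_optimizer = torch.optim.Adam([
    {"params": [mean], "lr": 1e-3},
    {"params": [log_std], "lr": 1e-4},
])
```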
Related Issues (15)
- Some questions about this code. HOT 1
- Loss is NaN in PLATIPUS HOT 2
- Platipus loss function potentially doesn't match paper HOT 2
- Question about the implementation of VAMPIRE HOT 4
- test in Platipus model HOT 2
- NaN loss when training with sine HOT 4
- error in Platipus model with sineline data source
- Models not training HOT 4
- Potential Problem of the loss function in ABML HOT 2
- Loss function for implementation of BMAML HOT 2
- Question about the initialization of theta0 in abml HOT 4
- First order approximate typo? HOT 1
- Consultation about the code HOT 1
- Regression code HOT 3