Comments (4)

jeffwillette commented on May 28, 2024

I think this was caused by a bug in my code. I had somehow cut off the last linear layer of the CNN, but everything trained without error (for a while), and there were just many zeros at the end of the logit vector. Once I fixed this, ABML trained without error. Sorry for the confusion.


cnguyen10 commented on May 28, 2024

Hi Jeff,

Thank you for letting me know.

For the NaN error, two places come to mind. The first is the KL divergence (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/_utils.py#L165): at L166, the code divides by s1_vec, so if s1_vec is zero, the result is NaN. The second is the prior loss that regularizes the meta-parameter std (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/Abml.py#L77): if tau = 0, the log-likelihood of the Gamma distribution for tau is undefined. Could you add some print statements for the values of the KL divergence and the prior loss to see which one causes the NaN? You may also want to check the hyper-parameters used for the Gamma prior. The ABML paper only ran on mini-ImageNet, so I hard-coded those values (https://github.com/cnguyen10/few_shot_meta_learning/blob/master/Abml.py#L20); they might cause problems when running on Omniglot.


jeffwillette commented on May 28, 2024

The NaNs seemed very random: every time I put a print statement in, they would come from somewhere else. I finally narrowed it down to the weight sample. The log std of some of the weights was so high that exponentiating it caused an inf: https://github.com/cnguyen10/few_shot_meta_learning/blob/master/_utils.py#L240
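For illustration, a toy reparameterized sample (the variable names are assumptions, not the code at the linked line) showing how a large log_std overflows under exp in float32:

```python
import torch

mu, log_std = torch.tensor(0.0), torch.tensor(100.0)  # float32 exp overflows past ~88.7
eps = torch.randn(())
w = mu + torch.exp(log_std) * eps  # std is inf, so w is +/-inf (NaN if eps == 0)
print(torch.exp(log_std), w)
```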

I have seen this happen with other Bayesian models when using exp instead of something like c + (1 - c) * softplus(log_sigma). I guess changing the KL might also help, but I am not sure which way to move it.
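A sketch of that parameterization, with c assumed to be a small constant such as 1e-3: softplus grows linearly for large inputs, so sigma never overflows, and the offset c keeps it bounded away from zero.

```python
import torch
import torch.nn.functional as F

def sigma_from(log_sigma, c=1e-3):
    # c + (1 - c) * softplus(log_sigma): strictly positive, no exp overflow
    return c + (1 - c) * F.softplus(log_sigma)

print(sigma_from(torch.tensor(100.0)))   # ~99.9: softplus is near-identity here
print(sigma_from(torch.tensor(-100.0)))  # ~1e-3: never collapses to zero
```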

I clamped the log_std parameters to be between (1e-8, 5) and training now seems stable, but there is probably a better solution.
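The clamp can be applied in place after each update; a sketch, where log_std_params is a hypothetical list of the model's log-std tensors and (1e-8, 5) are the bounds quoted above:

```python
import torch

log_std_params = [torch.nn.Parameter(torch.randn(10))]  # placeholder parameters

with torch.no_grad():
    for p in log_std_params:
        p.clamp_(min=1e-8, max=5.0)  # keeps exp(log_std) <= e^5 ~ 148, no overflow
```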


cnguyen10 commented on May 28, 2024

This problem may be caused by using a large meta_lr for log_std, resulting in overshooting for some values of log_std. The current implementation uses the same meta_lr for both the mean and log_std. It is probably a good idea to use two separate learning rates for the two meta-parameters.
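One way to do this in PyTorch is with optimizer parameter groups; a sketch with illustrative parameter lists and learning rates:

```python
import torch

mean_params = [torch.nn.Parameter(torch.zeros(10))]             # meta-parameter means
log_std_params = [torch.nn.Parameter(torch.full((10,), -3.0))]  # meta-parameter log stds

meta_opt = torch.optim.Adam([
    {"params": mean_params, "lr": 1e-3},
    {"params": log_std_params, "lr": 1e-4},  # smaller step to avoid overshooting
])
```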

