
bicv / sparsehebbianlearning


unsupervised learning of natural images -- à la SparseNet.

Home Page: https://laurentperrinet.github.io/publication/perrinet-19-hulk/

License: Other

Languages: Jupyter Notebook 99.82%, Python 0.18%, Shell 0.01%
Topics: hebbian-learning, matching-pursuit, neural-networks, neuroscience, sparse, sparse-coding, unsupervised-learning, unsupervised-learning-algorithms

sparsehebbianlearning's People

Contributors

angelofranciosini, laurentperrinet, victorboutin

Stargazers: 7

Watchers: 3

Forkers

jiths, edwintyh

sparsehebbianlearning's Issues

overflow in index when using fast P_cum

Be careful: the " +1 " in

ceil = P_cum.ravel()[indices + 1 - 2 * (p_c == 1) + stick]

overflows the array, so that when indices[-1]=511 you get

IndexError: index 51200 is out of bounds for axis 1 with size 51200

It also means that when indices[0]=511, you get ceil[0]=0 instead of ceil[0]=1, since the lookup wraps into the start of the next atom's histogram.
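
A minimal sketch of the failure and of one possible guard, assuming P_cum stores one cumulative histogram per atom and is raveled row by row; the shapes, the stick offsets and the clipping fix below are illustrative, not the repository's exact code:

    import numpy as np

    n_dico, nb_quant = 100, 512  # hypothetical: 100 atoms, 512 quantization bins
    P_cum = np.cumsum(np.random.rand(n_dico, nb_quant), axis=1)
    P_cum /= P_cum[:, -1:] * (1. + 1e-12)  # rows end slightly below 1., as floats may

    atoms = np.array([0, 57, n_dico - 1])  # atoms selected by the coder
    indices = np.array([511, 3, 511])      # quantized values; two hit the last bin
    stick = atoms * nb_quant               # row offsets into the raveled array
    p_c = P_cum.ravel()[indices + stick]

    # buggy lookup: p_c == 1 is False even in the last bin, so the "+1" is not
    # compensated; the last atom overflows (IndexError at index 51200) and atom 0
    # silently reads the start of the next atom's histogram (ceil ~ 0, not 1):
    # ceil = P_cum.ravel()[indices + 1 - 2 * (p_c == 1) + stick]

    # one possible guard: clip the within-row index before adding the offset
    safe = np.minimum(indices + 1 - 2 * (p_c == 1), nb_quant - 1)
    ceil = P_cum.ravel()[safe + stick]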

implement Sandin et al (2016) by a binary gain vector

Is it possible, in our current implementation, to implement the heuristic behind Equi-probable MP by using a binary gain vector gain = (P < p), where P is the probability of activation of each atom and p is a threshold?

This basically blocks the atoms which are "too rich" and lets the others learn. For a given input, the equilibrium probability should be p_eq = L0_sparseness / M, so that we may use something like p = p_eq * 1.1.
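
A minimal sketch of this heuristic; the way the activation probabilities P are estimated here (Poisson counts) is purely illustrative:

    import numpy as np

    M, L0_sparseness = 512, 16                 # hypothetical dictionary size and sparsity
    counts = np.random.poisson(lam=10., size=M).astype(float)
    P = counts / counts.sum() * L0_sparseness  # probability of activation of each atom

    p_eq = L0_sparseness / M                   # equilibrium: all atoms equally probable
    p = p_eq * 1.1                             # threshold slightly above equilibrium
    gain = (P < p)                             # binary gain: 0 blocks the "too rich" atoms

In matching pursuit, such a gain would multiply the correlations before the argmax, so that over-used atoms lose the competition until the others catch up.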

redundant get_data function

The get_data function is present both in the main scripts and in the tools scripts. We should keep only one; I suggest the one in the tools scripts.

implement a gradient descent on the homeostasis cost

The homeostasis using histogram equalization works and is mathematically sound, but it slows things down. We could instead do a gradient descent on a homeostatic cost, and hence have something similar to the implementation of the homeostasis in Olshausen's SparseNet, but on the probability of activation instead of the variance.
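
As a sketch of what such a rule could look like (update_gain, p_active, p_target and eta_homeo are hypothetical names, not the repository's API):

    import numpy as np

    def update_gain(gain, p_active, p_target, eta_homeo=0.01):
        # one multiplicative gradient-like step: atoms firing more often than
        # the target get their gain reduced, the others get it increased
        return gain * np.exp(-eta_homeo * (p_active / p_target - 1.))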

do_disk

To avoid problems in the corners of the square patches, use a disk-shaped mask to remove those pixels.

In https://github.com/bicv/SLIP/blob/master/SLIP/SLIP.py#L219 we have an example of how to do it:


        # build a smooth raised-cosine mask over the [-1, 1]^2 patch grid
        self.x, self.y = np.mgrid[-1:1:1j*self.pe.N_X, -1:1:1j*self.pe.N_Y]
        self.R = np.sqrt(self.x**2 + self.y**2)  # radial distance to the patch center
        if 'mask_exponent' not in self.pe.keys():
            self.pe.mask_exponent = mask_exponent
        # equal to 1 at the center, smoothly decaying to 0 at radius 1, 0 beyond
        self.mask = ((np.cos(np.pi*self.R)+1)/2 * (self.R < 1.))**(1./self.pe.mask_exponent)
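
For reference, a self-contained version of the same recipe for hypothetical 16x16 patches:

    import numpy as np

    N_X = N_Y = 16
    mask_exponent = 3.
    x, y = np.mgrid[-1:1:1j*N_X, -1:1:1j*N_Y]
    R = np.sqrt(x**2 + y**2)
    mask = ((np.cos(np.pi*R) + 1)/2 * (R < 1.))**(1./mask_exponent)

    patch = np.random.randn(N_X, N_Y)
    masked_patch = patch * mask  # corner pixels are driven to zero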
        

write a cost for sparsenet homeostasis rule

In the original SparseNet algorithm, the learning rule for the gain vector follows a heuristic which could be recast in a classical formulation using a "homeostatic cost function".

The advantage would be that the different rules for coding, learning and homeostatic gain would all be generated from one generic formulation, by gradient descent.
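
As a toy example of the kind of cost meant here (a sketch, not the derivation the issue asks for), one could penalize the deviation of each atom's coefficient variance V from the target sigma_0**2 and descend it:

    import numpy as np

    def homeo_cost(V, sigma_0):
        # quadratic penalty on the normalized variance of each atom's coefficients
        return .5 * np.sum((V / sigma_0**2 - 1.)**2)

    def homeo_gradient(V, sigma_0):
        # gradient with respect to V; the chain rule down to the gains,
        # which would yield the actual gain-update rule, is omitted here
        return (V / sigma_0**2 - 1.) / sigma_0**2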

One patch gets all the training

This happened with all the training runs I left running this weekend:

(screenshot: "screen shot 2017-12-04 at 16 37 11")

Did you get anything like that? Do you think it was because of the Fast Pcum function?

Define scope

We should concentrate on a few main points:

  • a global unsupervised learning cost, and how we derive three costs (coding, learning, homeostasis)
  • how we minimise the homeostatic cost (exact, gradient descent)
  • how we do the learning and how this is relatively independent of the coding (lasso, MP, ...)
  • quantification of the learning (not just the beauty of the filters)

So, for the moment, we should avoid talking about:

  • dynamic streams
  • convolutions
  • hierarchies

record statistics during the learning phase

The learning appears to show some analogies with a complex dynamical system, with different phases as a function of the parameters. We need to trace the evolution of some statistics during the learning.

One solution is to introduce a record_each parameter in the learn_dico function: if set to 0, it does nothing; otherwise, it records statistics every record_each steps during the learning phase (variance and kurtosis of the coefficients, to start with).
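
A minimal sketch of the proposed interface; the learning step is stubbed out with random coefficients for illustration:

    import numpy as np
    from scipy.stats import kurtosis

    def learn_dico(n_steps=1000, record_each=100):
        records = []
        for i_step in range(n_steps):
            code = np.random.laplace(size=(64, 512))  # stand-in for the sparse code
            # ... coding and dictionary-update steps would go here ...
            if record_each > 0 and i_step % record_each == 0:
                records.append({'step': i_step,
                                'variance': code.var(),
                                'kurtosis': kurtosis(code, axis=None)})
        return records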

Test fast homeo

To get to that branch, use

git checkout fast_homeo

Then, we can experiment with any crazy idea.

Some things to check, each of which could become a separate issue:

  • benchmark with respect to the full homeostasis - potentially with different codes
  • benchmark to find optimal learning parameters
  • derive the "learning rule" from theory
  • compare results (dico) quantitatively

implement epochs

To show the robustness of the learning, we should iterate the learning multiple times over the dataset, in different epochs.
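
A minimal sketch, with the learning step left as a placeholder:

    import numpy as np

    def learn_epochs(patches, n_epochs=10, seed=2017):
        rng = np.random.default_rng(seed)
        for epoch in range(n_epochs):
            # reshuffle the dataset at each epoch
            for i in rng.permutation(len(patches)):
                patch = patches[i]
                # ... one coding + dictionary-update step on `patch` ...
                pass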

accept 1D data

  • test with shape = (1, 256)
  • make the plot function more intelligent
