bicv / sparsehebbianlearning
Unsupervised learning of natural images -- à la SparseNet.
Home Page: https://laurentperrinet.github.io/publication/perrinet-19-hulk/
License: Other
Be careful: the "+ 1" in

ceil = P_cum.ravel()[indices + 1 - 2 * (p_c == 1) + stick]

generates an out-of-bounds access, so that when indices[-1] = 511 you get the error

IndexError: index 51200 is out of bounds for axis 1 with size 51200

It also means that when indices[0] = 511, then ceil[0] = 0 instead of the expected ceil[0] = 1, presumably because the "+ 1" spills over into the first (zero) entry of the next row of P_cum.
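A minimal sketch of one possible fix is to clamp the "+ 1" inside each row of P_cum, so that an index equal to the last quantization bin maps to the final value (1) instead of wrapping into the next row or past the end of the raveled array. The shapes below (nb_quant = 512 bins for 100 atoms, matching the 51200 in the traceback) and the surrounding variable values are assumptions, not the repository's actual code:

# hedged sketch of a clamped indexing, not the repository's actual fix
import numpy as np

nb_quant, n_dictionary = 512, 100              # assumed shapes matching the 51200 in the traceback
P_cum = np.cumsum(np.ones((n_dictionary, nb_quant)) / nb_quant, axis=1)
indices = np.array([511, 42, 511])             # per-atom bin indices; 511 is the last bin
stick = np.arange(len(indices)) * nb_quant     # offset of each selected atom's row in the raveled array
p_c = np.array([1., .5, .9])

ceil = P_cum.ravel()[np.minimum(indices + 1 - 2 * (p_c == 1), nb_quant - 1) + stick]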
The Adam method adapts the step size depending on the amplitude of the gradient (https://arxiv.org/pdf/1412.6980.pdf) and proves to be faster than the traditional fixed-step online stochastic gradient descent.
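For reference, a minimal sketch of the Adam update rule; the hyper-parameter defaults are those of Kingma & Ba (2014), not settings tuned for this repository:

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # first and second moment estimates of the gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias correction for the zero initialisation (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # the effective step is rescaled by the amplitude of past gradients
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v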
Is it possible, in our current implementation, to implement the heuristic behind Equi-probable MP by using a binary vector gain = (P < p), where P is the vector of activation probabilities of the atoms and p a threshold? This basically blocks those atoms which are "too rich" and lets the others learn. For a given input, the equilibrium probability should be p_eq = L0_sparseness / M (with M the number of atoms in the dictionary), such that we may use something like p = p_eq * 1.1.
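A minimal sketch of this heuristic with illustrative numbers; P stands in for the measured activation probabilities, and the names and values are assumptions, not the repository's API:

import numpy as np

L0_sparseness, M = 15, 512                       # assumed sparsity and dictionary size
P = np.random.rand(M) * 2 * L0_sparseness / M    # stand-in for the measured activation probabilities
p_eq = L0_sparseness / M                         # equilibrium probability of activation per atom
p = p_eq * 1.1                                   # small margin above equilibrium
gain = (P < p).astype(float)                     # 0 blocks the "too rich" atoms, 1 lets the others learn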
The get_data function is present both in the main scripts and in the tools scripts. We should keep just one; I suggest the one in the tools script.
The homeostasis using histogram equalization works and is great mathematically, but it slows things down... We could as well do a gradient descent of a homeostatic cost, hence have something similar to the implementation of homeostasis in Olshausen's SparseNet, but on the probability of activation instead of the variance.
One possible implementation is along the lines of Sandin et al. (2016); perhaps doable in one line there: https://github.com/bicv/SHL_scripts/blob/master/shl_scripts/shl_learn.py#L442
Then we need a way to tell the algorithm to choose one method or the other (without making the code even more baroque, sigh...). A sketch of the gradient-descent variant is given below.
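A minimal sketch of that gradient-descent alternative, assuming a running estimate p_hat of each atom's activation probability is pushed toward a target; the function name and parameters are illustrative, not the repository's API:

import numpy as np

def update_gain(gain, active, p_hat, p_target, eta_homeo=0.01, alpha=0.01):
    # running estimate of the probability of activation of each atom
    p_hat = (1 - alpha) * p_hat + alpha * active
    # multiplicative gradient step: damp the atoms that fire more often than the target
    gain *= np.exp(-eta_homeo * (p_hat - p_target))
    return gain, p_hat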
It makes no sense to code for DC components: filter out everything below a certain frequency given by the patches' size.
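A hedged sketch of such a high-pass filter in the Fourier domain; the cutoff tied to the patch size is the point made above, but the implementation itself is only an illustration, not the repository's preprocessing code:

import numpy as np

def highpass(patch, f_min=None):
    # remove the DC component and everything below a cutoff set by the patch size
    N = patch.shape[0]                      # assumes a square N x N patch
    if f_min is None:
        f_min = 1. / N
    fx, fy = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing='ij')
    mask = np.sqrt(fx ** 2 + fy ** 2) >= f_min
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * mask))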
Potential source for novel equations: "A tutorial on the free-energy framework for modelling perception and learning" by Rafal Bogacz, http://www.sciencedirect.com/science/article/pii/S0022249615000759
To avoid problems at the corners, use a disk mask to remove those pixels.
In https://github.com/bicv/SLIP/blob/master/SLIP/SLIP.py#L219 we have an example of how to do it:
# normalized coordinates over the image and radial distance to the center
self.x, self.y = np.mgrid[-1:1:1j*self.pe.N_X, -1:1:1j*self.pe.N_Y]
self.R = np.sqrt(self.x**2 + self.y**2)
if not 'mask_exponent' in self.pe.keys(): self.pe.mask_exponent = mask_exponent
# raised-cosine disk: 1 at the center, falling smoothly to 0 outside the unit radius
self.mask = ((np.cos(np.pi*self.R)+1)/2 *(self.R < 1.))**(1./self.pe.mask_exponent)
In the original SparseNet algorithm, the learning rule for the gain vector follows a heuristic which could be recast in a classical formulation using a "homeostatic cost function".
The advantage would be that the different rules for coding, learning and homeostatic gain adaptation would all be generated from one generic formulation of the gradient descent.
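For reference, a minimal sketch of the heuristic in question, following the gain/variance adaptation of Olshausen's sparsenet.m; the variable names and constants here are assumptions, written down only to make the proposed reformulation concrete:

import numpy as np

def sparsenet_gain_update(gain, S_var, coeffs, var_goal=0.1, alpha=0.02, var_eta=0.01):
    # running estimate of the second moment of each atom's coefficients
    S_var = (1 - var_eta) * S_var + var_eta * np.mean(coeffs ** 2, axis=1)
    # heuristic multiplicative adaptation of the gain toward the target variance
    gain = gain * (S_var / var_goal) ** alpha
    return gain, S_var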
We need to find a larger and more generic database than serre07_* or kodakdb:
- we should use the databases provided by machine learning frameworks, for instance STL-10: maybe we should include some part of the https://github.com/mttk/STL10/blob/master/stl10_input.py file into our scripts?
- best would be even more generic images, from Instagram etc...
Show that changing the gain changes the probability of firing: plot P_i versus p(a_i).
State a theorem predicting the outcome of the algorithm, assuming the existence of a solution.
We should concentrate on a few main points:
So we should at the moment avoid talking about:
The learning appears to have some analogies with a complex dynamical system, with different phases as a function of the parameters. We need to trace the evolution of some statistics during the learning.
One solution is to introduce a record_each parameter to the learn_dico function: if set to 0, it does nothing; else, it records the statistics every record_each steps during the learning phase (variance and kurtosis of the coefficients, to start off). A sketch is given below.
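A hedged sketch of that control flow, assuming a pandas DataFrame stores the recorded statistics; learn_dico's real signature and its inner coding/learning steps are not reproduced here:

import numpy as np
import pandas as pd
from scipy.stats import kurtosis

def learn_dico(X, n_dictionary, n_iter=1000, record_each=0):
    dico = np.random.randn(n_dictionary, X.shape[1])
    record = pd.DataFrame()
    for i in range(n_iter):
        coeffs = X @ dico.T                 # placeholder for the sparse coding step
        # ... dictionary and gain updates would go here ...
        if record_each > 0 and i % record_each == 0:
            record.loc[i, 'var'] = coeffs.var()
            record.loc[i, 'kurtosis'] = kurtosis(coeffs.ravel())
    return dico, record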
To get to that branch, use
git checkout fast_homeo
Then, we can experiment with any different crazy idea.
Some things to check, which could become different separate issues:
The code just works but could be improved by:
Some sources of inspiration:
To show the robustness of the learning, we should iterate the learning multiple times over different epochs.
The 1/f statistics come from averaging many images; one possibility is to average, something like
dictionary = np.array([chunk.mean(axis=0) for chunk in np.array_split(X_train, n_dictionary)])
around line https://github.com/bicv/SHL_scripts/blob/master/shl_scripts/shl_learn.py#L300
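A self-contained version of that line under assumed shapes (random data here only to make it runnable):

import numpy as np

X_train = np.random.randn(10000, 21 * 21)        # stand-in for raveled 21x21 patches
n_dictionary = 18 ** 2
dictionary = np.array([chunk.mean(axis=0)
                       for chunk in np.array_split(X_train, n_dictionary)])
print(dictionary.shape)                          # (n_dictionary, n_pixels)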
Validate with a notebook in probe to re-generate figure 14.1 from https://www.invibe.net/LaurentPerrinet/Publications/Perrinet15bicv