norse / norse

Deep learning with spiking neural networks (SNNs) in PyTorch.

Home Page: https://norse.github.io/norse/

License: GNU Lesser General Public License v3.0

Dockerfile 0.05% Python 99.67% CMake 0.05% Nix 0.19% Shell 0.04%
spiking-neural-networks deep-learning pytorch tensor machine-learning gpu autograd neural-network neuromorphic pytorch-lightning

norse's Introduction

A deep learning library for spiking neural networks.

[Badges: test status, chat on Discord, DOI]

Norse aims to exploit the advantages of bio-inspired neural components, which are sparse and event-driven - a fundamental difference from artificial neural networks. Norse expands PyTorch with primitives for bio-inspired neural components, bringing you two advantages: a modern and proven infrastructure based on PyTorch and deep learning-compatible spiking neural network components.

Documentation: norse.github.io/norse/

1. Getting started

The fastest way to try Norse is via the Jupyter notebooks on Google Colab.

Alternatively, you can install Norse locally and run one of the included tasks such as MNIST:

python -m norse.task.mnist

2. Using Norse

Norse presents plug-and-play components for deep learning with spiking neural networks. Here, we describe how to install Norse and start to apply it in your own work. Read more in our documentation.

2.1. Installation

We assume you are using Python version 3.8+ and have installed PyTorch version 1.9 or higher. Read more about the prerequisites in our documentation.

Method        Instructions                                         Prerequisites
From PyPI     pip install norse                                    Pip
From source   pip install -qU git+https://github.com/norse/norse   Pip, PyTorch
With Docker   docker pull quay.io/norse/norse                      Docker
From Conda    conda install -c norse norse                         Anaconda or Miniconda

For troubleshooting, please refer to our installation guide, create an issue on GitHub or write us on Discord.

2.2. Running examples

Norse is bundled with a number of example tasks, serving as short, self-contained, correct examples (SSCCE). They can be run by invoking the norse module from the base directory. More information and tasks are available in our documentation and in your console by typing: python -m norse.task.<task> --help, where <task> is one of the task names.

  • To train an MNIST classification network, invoke
    python -m norse.task.mnist
  • To train a CIFAR classification network, invoke
    python -m norse.task.cifar10
  • To train the cartpole balancing task with policy gradients, invoke
    python -m norse.task.cartpole

Norse is compatible with PyTorch Lightning, as demonstrated in the PyTorch Lightning MNIST task variant (requires PyTorch Lightning):

python -m norse.task.mnist_pl --gpus=4

2.3. Example: Spiking convolutional classifier

Open In Colab

This classifier is taken from our tutorial on training a spiking MNIST classifier and achieves >99% accuracy.

import torch, torch.nn as nn
from norse.torch import LICell             # Leaky integrator
from norse.torch import LIFCell            # Leaky integrate-and-fire
from norse.torch import SequentialState    # Stateful sequential layers

model = SequentialState(
    nn.Conv2d(1, 20, 5, 1),      # Convolve from 1 -> 20 channels
    LIFCell(),                   # Spiking activation layer
    nn.MaxPool2d(2, 2),
    nn.Conv2d(20, 50, 5, 1),     # Convolve from 20 -> 50 channels
    LIFCell(),
    nn.MaxPool2d(2, 2),
    nn.Flatten(),                # Flatten to 800 units
    nn.Linear(800, 10),
    LICell(),                    # Non-spiking integrator layer
)

data = torch.randn(8, 1, 28, 28) # 8 batches, 1 channel, 28x28 pixels
output, state = model(data)      # Provides a tuple (tensor (8, 10), neuron state)
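For completeness, a single optimisation step on this model could look as follows. This is a minimal sketch, assuming a cross-entropy loss and an Adam optimiser; the labels are random placeholders, not part of the tutorial:

labels = torch.randint(0, 10, (8,))  # Hypothetical class labels for the batch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

output, state = model(data)          # Forward pass, as above
loss = torch.nn.functional.cross_entropy(output, labels)
optimizer.zero_grad()
loss.backward()                      # Backpropagate through the spiking layers
optimizer.step()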

2.4. Example: Long short-term spiking neural networks

The long short-term spiking neural networks from the paper by G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass (2018) are another interesting way to apply Norse:

import torch
from norse.torch import LSNNRecurrent
# Recurrent LSNN network with 2 input neurons and 10 output neurons
layer = LSNNRecurrent(2, 10)
# Generate data: 20 timesteps with 8 datapoints per batch for 2 neurons
data  = torch.zeros(20, 8, 2)
# Tuple of (output spikes of shape (20, 8, 2), layer state)
output, new_state = layer(data)

3. Why Norse?

Norse was created for two reasons: 1) to apply findings from decades of research in practical settings and 2) to accelerate our own research within bio-inspired learning.

We are passionate about Norse: we strive to follow best practices and promise to maintain this library for the simple reason that we depend on it ourselves. We have implemented a number of neuron models, synapse dynamics, encoding and decoding algorithms, dataset integrations, tasks, and examples. Combined with the PyTorch infrastructure and our high coding standards, we have found Norse to be an excellent tool for modelling scalable experiments, and Norse is actively being used in research.

Finally, we are working to keep Norse as performant as possible. Preliminary benchmarks suggest that Norse achieves excellent performance on small networks of up to ~5000 neurons per layer. Aided by the preexisting investment in scalable training and inference with PyTorch, Norse scales from a single laptop to several nodes on an HPC cluster with little effort, as illustrated by our PyTorch Lightning example task.

Read more about Norse in our documentation.

4. Similar work

We refer to the Neuromorphic Software Guide for a comprehensive list of software for neuromorphic computing.

5. Contributing

Contributions are warmly encouraged and always welcome. However, we also have high expectations around the code base, so if you wish to contribute, please refer to our contribution guidelines.

6. Credits

Norse is created by Christian Pehle and Jens Egholm Pedersen.

More information about Norse can be found in our documentation. The research has received funding from the EC Horizon 2020 Framework Programme under Grant Agreements 785907 and 945539 (HBP) and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).

7. Citation

If you use Norse in your work, please cite it as follows:

@software{norse2021,
  author       = {Pehle, Christian and
                  Pedersen, Jens Egholm},
  title        = {{Norse -  A deep learning library for spiking 
                   neural networks}},
  month        = jan,
  year         = 2021,
  note         = {Documentation: https://norse.ai/docs/},
  publisher    = {Zenodo},
  version      = {0.0.7},
  doi          = {10.5281/zenodo.4422025},
  url          = {https://doi.org/10.5281/zenodo.4422025}
}

Norse is actively applied and cited in the literature. We refer to Google Scholar or Semantic Scholar for a list of citations.

8. License

LGPLv3. See LICENSE for license details.

norse's People

Contributors

4iar, adelpierre, alexei95, almaluna94, chauhant, cpehle, emijan-kth, erikberter, h-elbez, huizerd, jegp, josegomesjpg, lucablessing, muffgaga, omahs, pugavkomm, schmitts, thelamentinggirl, tobias-fischer


norse's Issues

Factor out (parts of) the documentation

Right now the generated Sphinx documentation is committed to the main repository (mainly so that github.io was easy to set up). It would probably be a good idea to move this either to a separate repository or to generate the documentation through GitHub actions. I've also slightly modified the alabaster theme, so that modification would need to be hosted as well.

Make plasticity from correlation measurement more useful

The current implementation of the correlation update is not very useful, because it scales poorly with the size of the network. A more restricted implementation, which limits itself to single-neuron weights, would be a good idea.

Collect SoA Image Recognition

There are a number of good results on image recognition datasets with spiking neural networks by now. We should consider collecting and implementing some of these architectures as references.

Biological Neuron Parameters

Right now we don't provide biologically plausible neuron parameters (that is, parameters with realistic units and magnitudes); instead, we normalise most values to be between 0 and 1 and identify the reset and leak voltages for neurons. It would be good to introduce biologically plausible neuron parameter ranges, based for example on the cell models of PyNN. One immediate problem is to determine appropriate weight initialisations.
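As a starting point, such a parameter set could look like the sketch below. The values are loosely based on PyNN's IF_curr_exp defaults, and the class is purely illustrative, not a proposed Norse API:

from dataclasses import dataclass

@dataclass
class BiologicalLIFParameters:
    # Illustrative LIF parameters with physical units
    v_rest: float = -65.0     # Resting potential (mV)
    v_reset: float = -65.0    # Reset potential after a spike (mV)
    v_thresh: float = -50.0   # Firing threshold (mV)
    tau_m: float = 20.0       # Membrane time constant (ms)
    cm: float = 1.0           # Membrane capacitance (nF)
    tau_refrac: float = 0.1   # Refractory period (ms)
    tau_syn_exc: float = 5.0  # Excitatory synaptic time constant (ms)
    tau_syn_inh: float = 5.0  # Inhibitory synaptic time constant (ms)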

Improve STDP support

Use the code in STDP sensor to

  1. Provide an example of how to use and work with STDP, both
     • in the docs (what is it?)
     • in a task (prove it works)
  2. Build an STDP module with automatic weight updates
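As a rough illustration of what such a module could compute, a pair-based STDP update with pre- and postsynaptic traces might look like this. The function below is a sketch in plain PyTorch, not Norse's API:

import math
import torch

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              tau_pre=20.0, tau_post=20.0, a_plus=0.01, a_minus=0.012, dt=1.0):
    # Decay the eligibility traces, then add the new spikes
    pre_trace = pre_trace * math.exp(-dt / tau_pre) + pre_spikes
    post_trace = post_trace * math.exp(-dt / tau_post) + post_spikes
    # Potentiate where a postsynaptic spike follows presynaptic activity,
    # depress where a presynaptic spike follows postsynaptic activity
    w = w + a_plus * torch.outer(post_spikes, pre_trace) \
          - a_minus * torch.outer(post_trace, pre_spikes)
    return w, pre_trace, post_trace  # w has shape (post, pre)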

More Flexible Synapse Models

Currently we implement a limited number of synapse models. While this is sufficient for machine-learning-oriented approaches, it is limiting when it comes to neuroscience-inspired approaches. Brian2 solves this by allowing users to specify synapse dynamics, as described here: https://brian2.readthedocs.io/en/stable/resources/tutorials/2-intro-to-brian-synapses.html. At a minimum, we should look into defining some of the other common synapse types and provide a tutorial on how to implement additional ones.
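For instance, a current-based exponential synapse could be added as a simple discrete-time update. This is a sketch of one such dynamic in plain PyTorch, not a proposal for the final interface:

import math
import torch

def exp_synapse_step(i_syn, spikes, w, tau_syn=5.0, dt=1.0):
    # Incoming spikes cause an instantaneous jump in the synaptic current,
    # which then decays exponentially with time constant tau_syn
    decay = math.exp(-dt / tau_syn)
    return i_syn * decay + w @ spikes  # w: (post, pre), spikes: (pre,)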

Quantization support

Currently, models are tested with floating-point synaptic weights. Models with quantised weights should be supported, especially in order to support neuromorphic hardware. Part of this support already exists in PyTorch and might be usable as-is.
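As a starting point, PyTorch's tensor quantisation can already round-trip weights through int8 to simulate the effect of quantised inference; a minimal sketch:

import torch

w = torch.randn(50, 20)               # Some synaptic weight matrix
scale = w.abs().max().item() / 127    # Map the weight range onto int8
q = torch.quantize_per_tensor(w, scale=scale, zero_point=0, dtype=torch.qint8)
w_quantised = q.dequantize()          # Float tensor carrying the quantisation error
print((w - w_quantised).abs().max())  # Inspect the introduced rounding error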

Angular Encoding

Encoding / decoding the position on a circle is a common task useful for robotics applications. Population codes of three or more neurons are a useful way of encoding such a position. The basic idea would be to pick a set of points, together with balls centered at those points, such as in the following figure:

[Figure: Cech-example.png, a set of points with balls centred at them (by ProboscideaRubber15, own work, CC BY-SA 4.0)]

To each point one can assign an input source, whose firing rate would follow a bump function centered at that point.
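As a sketch of the idea, the firing rates of such a population could be computed with a von Mises-style bump around each neuron's preferred angle; the neuron count and sharpness below are illustrative:

import math
import torch

def angular_population_rates(theta, n_neurons=8, kappa=4.0):
    # Preferred angles spread evenly around the circle
    preferred = torch.linspace(0, 2 * math.pi, n_neurons + 1)[:-1]
    # Bump function centred on each preferred angle, maximal (1.0) at the centre
    return torch.exp(kappa * (torch.cos(theta - preferred) - 1))

rates = angular_population_rates(torch.tensor(0.5))  # One rate per neuron in (0, 1]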

Add better decoding support

Similar to #27 but for decoding. We should have built-in defaults for common decoding schemes, like softmax, first spike, population, angular, etc.
Any others, @cpehle?
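For instance, rate and first-spike decoding over a spike train of shape (time, batch, neurons) could look roughly like the sketch below; neither function reflects a settled API:

import torch

def rate_decode(spikes):
    # The neuron with the highest spike count wins
    return spikes.sum(dim=0).argmax(dim=-1)

def first_spike_decode(spikes):
    # The neuron that fires first wins; silent neurons get maximal latency
    t = torch.arange(spikes.shape[0], dtype=spikes.dtype).view(-1, 1, 1)
    latency = torch.where(spikes > 0, t, torch.full_like(spikes, spikes.shape[0]))
    return latency.min(dim=0).values.argmin(dim=-1)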

Gradient checkpointing

Currently we don't provide any support for checkpointing the forward integration; instead, we save the whole forward dynamics for the backward pass. This scales poorly with growing sequence lengths. We should consider support for checkpointing the forward computation to limit memory requirements.
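torch.utils.checkpoint already expresses the core mechanism: recompute each chunk of the forward dynamics during the backward pass instead of storing every timestep. A rough sketch, assuming a hypothetical step_fn(inputs, state) -> (outputs, state) that integrates a chunk of timesteps:

import torch
from torch.utils.checkpoint import checkpoint

def run_chunked(step_fn, xs, state, chunk=100):
    # Only chunk boundaries are kept in memory; intermediate dynamics
    # are recomputed when gradients are needed
    outputs = []
    for start in range(0, xs.shape[0], chunk):
        out, state = checkpoint(step_fn, xs[start:start + chunk], state)
        outputs.append(out)
    return torch.cat(outputs), state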

Wrong ConvNet model parameter

Bug in the ConvNet model parameters: the model uses the LIFFeedForward cell with the model constructor parameter rather than the method parameter.

Requirements incompatible with Google Colab

The current requirements.txt specifies versions that are incompatible with Google Colab:

ERROR: google-colab 1.0.0 has requirement six~=1.12.0, but you'll have six 1.15.0 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.

Since this is supposed to be one of the primary ways in which new users can try out the library, we should adjust the requirements accordingly.

Create CI test to check learning

It would be useful to introduce a simple binary classification task that could be used to test for regressions in the different surrogate gradient implementations and modules.
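Such a test could, for example, fit a tiny model on linearly separable data and assert that the loss drops. A minimal sketch, with a plain linear layer standing in for the spiking module under test:

import torch

def test_learning_decreases_loss():
    torch.manual_seed(0)
    x = torch.randn(64, 2)
    y = (x.sum(dim=1) > 0).long()      # Linearly separable binary labels
    model = torch.nn.Linear(2, 2)      # Stand-in for the spiking module under test
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    initial_loss = torch.nn.functional.cross_entropy(model(x), y).item()
    for _ in range(100):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    assert loss.item() < initial_loss  # Training must reduce the loss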

Introduce Code Linting into CI

Currently there are no automated linting checks performed before a commit.
This has a potential impact on code quality.

  • We should introduce linting using a sensible set of linting options, for example in pylint
  • In order to do so, we need to fix a small number of pre-existing linting violations

Publish Anaconda package

Norse is currently not published as an Anaconda package. To increase adoption, we should allow for a simple conda install norse.

C++ version of neuron primitives

Using the libtorch C++ extension API, it is relatively straightforward to implement the core neuron primitives in C++. This would have the advantage of facilitating integration with alternative backends, such as neuromorphic hardware. This issue tracks progress towards this goal.

Performance tests

Add performance tests that provide stable metrics for runtime/computational performance across releases. This allows us to test and verify new features.

Improve model documentation

To help users engage with the library, understand use cases, and apply the code we should

  • List available modules
  • Describe model characteristics / use cases
  • Provide examples of uses
    • This could be links to existing code

Exponential Integrate and Fire neuron model

This is a standard PyNN model that should be relatively easy to implement. The reference is:

Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model
as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642.

See the corresponding PyNN implementation.
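For orientation, the model extends the leaky integrator with an exponential spike-initiation term and an adaptation current w. A forward-Euler sketch of the published dynamics, with illustrative parameter values:

import torch

def adex_step(v, w, i_in, dt=0.1, c=1.0, g_l=0.05, e_l=-65.0, delta_t=2.0,
              v_t=-50.0, tau_w=100.0, a=0.01, b=0.5, v_reset=-65.0, v_spike=-30.0):
    # C dv/dt = -g_L (v - E_L) + g_L Delta_T exp((v - v_T) / Delta_T) - w + I
    # tau_w dw/dt = a (v - E_L) - w
    dv = (-g_l * (v - e_l) + g_l * delta_t * torch.exp((v - v_t) / delta_t) - w + i_in) / c
    dw = (a * (v - e_l) - w) / tau_w
    v, w = v + dt * dv, w + dt * dw
    spiked = v > v_spike
    v = torch.where(spiked, torch.full_like(v, v_reset), v)  # Reset on spike
    w = torch.where(spiked, w + b, w)                        # Spike-triggered adaptation
    return v, w, spiked.float()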

Add tests for inference / training

A small model trained on a simple task would make it easier to test training and inference. It would also help test TorchScript support.

Support for torch.nn.Sequential

torch.nn.Sequential is a very convenient wrapper for creating feed forward networks by stacking layers. By implementing the integration layer by layer and lifting non-temporal operations like convolutions by applying them pointwise in time, we can reuse the torch.nn.Sequential module without any modifications.

torch.nn.Sequential(
       norse.torch.module.Lift(torch.nn.Conv2d(...)),
       norse.torch.module.LIFFeedForwardLayer(...),
       ...
)

Question: is grpcio necessary?

Pardon a silly question: what is grpcio needed for? Is it essential?

The reason is that installing norse on Python 3.9 takes a really long time. I guess they don't publish compiled wheels, probably because pypa didn't publish a manylinux container, because there's some confusion about ABI that could still possibly change...

Rename `parameters` -> `p`

We are right now overwriting the parameters() method from Torch's nn.Module. That is not good. We agreed to disregard linting standards and revert to just calling the neuron parameters p.

Automate documentation compilation and release

Currently the docs are released manually. We can save time by automating the compilation and release of the documentation, either by

  • using a gh-pages branch
  • using an external service like GitHub Actions/CI

Regularisation

There are a number of spike-based regularisation schemes that we should implement. In this issue we can track the state of the art regarding regularisation in spiking neural networks.
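One common scheme that could serve as a starting point is penalising deviations from a target firing rate. A sketch of such a regulariser added to the task loss; the target rate and weight are illustrative:

import torch

def firing_rate_penalty(spikes, target_rate=0.02, weight=1e-3):
    # spikes: binary tensor of shape (time, batch, neurons)
    rate = spikes.mean(dim=0)  # Per-neuron firing rate in [0, 1]
    return weight * ((rate - target_rate) ** 2).sum()

# loss = task_loss + firing_rate_penalty(spikes)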

Can the author provide an example of speech recognition?

Hello! This is a very good toolkit; thanks for open-sourcing it. I found that all the examples are based on image classification. Would you consider writing an example of voice classification?

Better input encoding / spike generation

Currently, input encoding and spike generation are done ad hoc. It would be better to define, implement, and integrate some common spike sources (Poisson, ...). This is particularly important in order to easily compare with other simulators.
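For example, a Poisson spike source can be derived from a rate-coded input by thresholding uniform noise at every timestep. A sketch in plain PyTorch (Norse may expose this differently):

import torch

def poisson_encode(values, timesteps=100, max_rate=0.5):
    # Turn values in [0, 1] into Bernoulli spike trains whose firing
    # probability per timestep is proportional to the value
    prob = values.clamp(0, 1) * max_rate
    return (torch.rand(timesteps, *values.shape) < prob).float()

spikes = poisson_encode(torch.rand(8, 2))  # Shape: (100, 8, 2)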
