Multi-layer perceptrons using RPROP

nn is a lightweight neural network library that uses resilient propagation (RPROP) to adapt its weights.

Installation

nn has been tested on Ubuntu, Arch Linux and macOS.

  • install CMake, Eigen3 and Subversion. On Ubuntu this is done as follows:

      sudo apt-get install cmake subversion libeigen3-dev
    
  • clone the nn repository or download a copy of the source

  • change to the nn directory and create a build folder

      cd path/to/nn
      mkdir build
    
  • run cmake from within the build folder and compile the library using make

      cd build
      cmake ..
      make
    
  • run the example code

      ./tutorial
    
  • to compile unit tests for nn, run cmake with the option -DWITH_GTEST=ON

      cmake .. -DWITH_GTEST=ON
      make
      make test
    

License

nn is free software, licensed under the BSD license. A copy of this license is distributed with the software.

Usage of the library

The source code for this tutorial can be found in tutorial.cpp.

Preparing your data

Organize your training data into an (m x n_input) matrix containing the training inputs, where each row corresponds to a training sample and each column to a feature. Prepare a second matrix of size (m x n_output) containing the target values, where n_output is the dimensionality of the output.

matrix_t X(m, n_input);
matrix_t Y(m, n_output);

// fill with data
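
For illustration, a toy XOR-style dataset with m = 4 samples, n_input = 2 and n_output = 1 could be filled like this (the values are purely illustrative):

matrix_t X(4, 2);
matrix_t Y(4, 1);

// four input samples of the XOR problem, one per row
X << 0, 0,
     0, 1,
     1, 0,
     1, 1;

// one target value per sample
Y << 0,
     1,
     1,
     0;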

Initializing the neural network

This neural network implementation only supports fully connected feed-forward multi-layer perceptrons (MLPs) with sigmoidal activation functions. The neurons are organized into k layers: one input layer, one output layer and an arbitrary number of hidden layers. Each neuron has outgoing connections to all neurons in the subsequent layer. The number of neurons in the input and output layers is given by the dimensionality of the training data. After specifying the network topology you can create the NeuralNet object; the weights are initialized randomly.

Eigen::VectorXi topo(k);
topo << n_input, n1, n2, ..., n_output;

// initialize a neural network with given topology
NeuralNet nn(topo);
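
For example, a network with two inputs, a single hidden layer of four neurons and one output (k = 3, matching the toy data above) would be set up like this; the layer sizes are purely illustrative:

Eigen::VectorXi topo(3);
topo << 2, 4, 1;

NeuralNet nn(topo);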

Scaling the data

When working with MLPs you should always scale your data so that all features are in the same range and the output values lie between 0 and 1. You can do this by passing your training data to the autoscale function, which computes a suitable mapping. After calling autoscale this mapping is applied automatically, so you only have to do this once. To reset the scaling parameters to standard values, call autoscale_reset.

nn.autoscale(X,Y);
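
Should you want to discard a previously computed mapping, reset it as described above (assuming autoscale_reset takes no arguments):

// reset the scaling parameters to standard values
nn.autoscale_reset();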

Training the network

Alternate between computing the quadratic loss of the MLP and adapting the parameters until the loss converges. You can also specify a regularization parameter lambda, which penalizes large weights and thereby helps to avoid overfitting.

F_TYPE err;
for (int i = 0; i < max_steps; ++i) {
    err = nn.loss(X, Y, lambda);  // compute the regularized quadratic loss
    nn.rprop();                   // one RPROP parameter update
}
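
If you prefer to stop as soon as the loss has converged instead of running a fixed number of steps, you can track the change of the loss between iterations. This is only a sketch: the tolerance tol is an illustrative parameter, not part of the library.

// requires <cmath> for std::abs
F_TYPE err = nn.loss(X, Y, lambda);
for (int i = 0; i < max_steps; ++i) {
    nn.rprop();                               // one RPROP update
    F_TYPE new_err = nn.loss(X, Y, lambda);   // recompute the loss
    if (std::abs(err - new_err) < tol)        // converged
        break;
    err = new_err;
}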

Making predictions

Once you have trained a model, you can make predictions on new data by passing it through the network and reading off the activations of the output layer.

nn.forward_pass(X_test);
matrix_t Y_test = nn.get_activation();

Reading and writing models to disk

You can read and write MLPs to binary files.

// write model to disk
nn.write(filename);
    
// read model from disk
NeuralNet nn(filename);

Changing the floating point precision

nn uses double precision floats by default. You can change this behaviour in the file nn.h.

#define F_TYPE double
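
For example, to switch the library to single precision, change the define accordingly:

#define F_TYPE float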

MNIST dataset

In order to test nn on the MNIST dataset, download the dataset and run the mnist tool.

./mnist path/to/data

The tool will train an MLP with two hidden layers, containing 300 and 100 neurons respectively and connected by 266,610 weights. With this setup, error rates below 5% are achieved on the test dataset.

Make nn run in parallel

Some algorithms of the Eigen library can exploit multiple cores in your hardware. This happens automatically if your compiler supports it. You can control the number of threads that will be used by setting the OpenMP environment variable OMP_NUM_THREADS.

OMP_NUM_THREADS=n ./my_program

Using nn in your own project

Just copy nn.h and nn.cpp into your workspace, make sure that the Eigen headers are found and start coding!
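
Putting the steps of this tutorial together, a complete minimal program might look like the following sketch. The XOR data, topology, step count and filename are illustrative, not prescribed by the library, and error handling is omitted:

#include "nn.h"

int main() {
    // toy XOR dataset: 4 samples, 2 inputs, 1 output
    matrix_t X(4, 2);
    matrix_t Y(4, 1);
    X << 0, 0,
         0, 1,
         1, 0,
         1, 1;
    Y << 0, 1, 1, 0;

    // fully connected 2-4-1 MLP with randomly initialized weights
    Eigen::VectorXi topo(3);
    topo << 2, 4, 1;
    NeuralNet nn(topo);

    // compute and store the scaling of inputs and outputs once
    nn.autoscale(X, Y);

    // train with RPROP, without regularization (lambda = 0)
    for (int i = 0; i < 1000; ++i) {
        nn.loss(X, Y, 0.0);
        nn.rprop();
    }

    // predict on the training inputs
    nn.forward_pass(X);
    matrix_t prediction = nn.get_activation();

    // save the trained model to disk (hypothetical filename)
    nn.write("xor.model");

    return 0;
}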
