
tinn's Introduction

Tinn (Tiny Neural Network) is a 200-line, dependency-free neural network library written in C99.

For a demo of learning handwritten digits, get some training data:

wget http://archive.ics.uci.edu/ml/machine-learning-databases/semeion/semeion.data

make; ./test

The training data consists of handwritten digits written both slowly and quickly. Each line in the data set corresponds to one handwritten digit. Each digit is 16x16 pixels in size, giving 256 inputs to the neural network.

At the end of each line, 10 values one-hot encode which digit was written:

0: 1 0 0 0 0 0 0 0 0 0
1: 0 1 0 0 0 0 0 0 0 0
2: 0 0 1 0 0 0 0 0 0 0
3: 0 0 0 1 0 0 0 0 0 0
4: 0 0 0 0 1 0 0 0 0 0
...
9: 0 0 0 0 0 0 0 0 0 1

This gives 10 outputs to the neural network. The test program will output the accuracy for each digit. Expect above 99% accuracy for the correct digit, and less than 0.1% accuracy for the other digits.

Features

  • Portable - runs anywhere a C99 or C++98 compiler is present.

  • Sigmoidal activation.

  • One hidden layer.

Tips

  • Tinn will never use more than the C standard library.

  • Tinn is great for embedded systems. Train a model on a powerful desktop, load it onto a microcontroller, and use the analog-to-digital converter to predict real-time events.

  • The Tinn source code will always be less than 200 lines. Functions externed in the Tinn header are prefixed with the xt namespace, standing for externed tinn.

  • Tinn can easily be multi-threaded with a bit of ingenuity, but the master branch will remain single-threaded to aid development for embedded systems.

  • Tinn does not seed the random number generator. Do not forget to do so yourself.

  • Always shuffle your input data. Shuffle again after every training iteration.

  • Get greater training accuracy by annealing your learning rate. For instance, multiply your learning rate by 0.99 every training iteration. This will zero in on a good local minimum.

Disclaimer

Tinn is a practice in minimalism.

Tinn is not a fully featured neural network C library like Kann or Genann:

https://github.com/attractivechaos/kann

https://github.com/codeplea/genann

Ports

Rust: https://github.com/dvdplm/rustinn

Other

A Tutorial using Tinn NN and CTypes

Tiny Neural Network Library in 200 Lines of Code

tinn's People

Contributors

glouw, nicolas-sauzede, saschagrunert, timgates42


tinn's Issues

Biases never updated

The biases are never updated by the backprop algorithm as it currently stands.
Biases can be included in the array of weights, which would simplify things a bit (or keep treating them separately, but then they need to be updated). Each input to the network will then need an extra, last value of 1. First we can update the code like so:

diff --git a/Tinn.c b/Tinn.c
index 74eb573..9a6fe8b 100644
--- a/Tinn.c
+++ b/Tinn.c
@@ -74,7 +74,7 @@ static void fprop(const Tinn t, const float* const in)
         float sum = 0.0f;
         for(int j = 0; j < t.nips; j++)
             sum += in[j] * t.w[i * t.nips + j];
-        t.h[i] = act(sum + t.b[0]);
+        t.h[i] = act(sum);
     }
     // Calculate output layer neuron values.
     for(int i = 0; i < t.nops; i++)
@@ -82,11 +82,11 @@ static void fprop(const Tinn t, const float* const in)
         float sum = 0.0f;
         for(int j = 0; j < t.nhid; j++)
             sum += t.h[j] * t.x[i * t.nhid + j];
-        t.o[i] = act(sum + t.b[1]);
+        t.o[i] = act(sum);
     }
 }
 
-// Randomizes tinn weights and biases.
+// Randomizes tinn weights.
 static void wbrand(const Tinn t)
 {
     for(int i = 0; i < t.nw; i++) t.w[i] = frand() - 0.5f;
@@ -113,8 +113,7 @@ Tinn xtbuild(const int nips, const int nhid, const int nops)
 {
     Tinn t;
     // Tinn only supports one hidden layer so there are two biases.
-    t.nb = 2;
-    t.nw = nhid * (nips + nops);
+    t.nw = nhid * (nips + nops + 2);
     t.w = (float*) calloc(t.nw, sizeof(*t.w));
     t.x = t.w + nhid * nips;
     t.b = (float*) calloc(t.nb, sizeof(*t.b));

Then the training part needs to be adjusted so that each input row contains an extra column set to 1.

Example (for xor training - remember, these are the inputs now):

$$ \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix} $$

The last column will ensure the bias weight is always included in forward propagation. And backprop will update all weights as usual, including the biases.

Suggestion: Remove call to powf for square

Hi! Just a quick suggestion regarding performance when calling err

tinn/Tinn.c

Line 11 in e92adf3

return 0.5f * powf(a - b, 2.0f);

Instead of calling powf from the standard library, it would be nicer to call an inline square() function, or even better, use a macro (I couldn't convince Visual Studio to actually remove the call instruction otherwise).

For example a call to powf on my machine results in a call to _libm_sse2_pow_precise which has more than 50 instructions until it returns, when what you really want is just one single mul instruction.

I realize that it might not really be that big of a deal, because calls to err only scale with output and iteration size, but making the change is super trivial.

I did a quick test anyway on my i7-7700K with Visual Studio 2017 and /O2 optimization enabled. Using a simple SQUARE(x) macro resulted in a 2.6% speed improvement per call to xttrain, while a call to a square(x) function resulted in a 1.8% speed improvement. This will only get better as the number of iterations and outputs grows.

Dotnet support

Hi,

Great little library you have here!

I wanted to use this in dotnet, but couldn't find any implementations, so I ported this myself under tinn-dotnet.

What's in it:

  • All features from C ported to dotnet.
  • Two-way compatibility with .tinn files.
  • An example with the MNIST database.
  • Automated tests for saving/loading and training a simple XOR network.

Hopefully someone will find this useful as well.

Also, I raised a PR #27 to add a link to this port in readme port section.

Have a great day!

output functions

Is it possible to create an output function in C?
For example, I train my network and need to embed the resulting function in my C program.
Is it possible to save to a file a C function (a piece of the program) with all the variables, biases, etc.,
as only one big function?

Weird output of xttrain()(1.#INF00)

Hi, I'm trying to write a small program to train a network on the iris dataset (https://archive.ics.uci.edu/ml/datasets/iris). When training, xttrain returns values that are printed as 1.#INF00. I don't know whether the fault is on the library's side or mine. Another weird thing is that when I run the program in debug mode, it works just fine. Here is the repository with my code: https://github.com/derlozi/TINN-Iris-Dataset I hope that you can help me.
Thanks in advance!

xor and access data

Hello,

Is it possible to create an example of how to change the weights?
For example, I train XOR.
How do I randomly change some neurons? How do I access their values, etc.?

An XOR example would help.

Import bias and weight from Keras

Hello,

I am trying to train a network using Keras and import its weights and biases into Tinn, to be able to benefit from the Keras environment.

  1. I create a network using Tinn and train it
  2. I build a network in Keras that should be identical to the Tinn one
  3. Then I train it inside Keras and display the weights and biases
  4. I put the weights and biases inside a Tinn model and load them

But the network fails to predict the output correctly.

Here is my code; in all my Keras layers I used a sigmoid activation function, which should lead to the same result: KerasTinn.zip. But I get two biases for every layer from Keras, and neither of them works correctly. I am also not sure the order of the weights is correct.

Any idea how to solve this?

Wrong results

Hi,

Thank you for your library; I was looking for something like this for a long time!

Anyway, I do not succeed in using your example on number recognition; I get wrong results and can't figure out why:
1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
0.624335 0.837375 0.552968 0.280362 0.601309 0.397255 0.634169 0.425480 0.439418 0.132246

Julien

Questions about bias

Hi!
Great library!
But I have a question about these lines:
t.h[i] = act(sum + t.b[0]);
t.o[i] = act(sum + t.b[1]);
Why is there only one fixed bias shared by all the neurons in a layer?
Maybe there's some misunderstanding.
Looking forward to your reply.

Rust bindings

Hi, I was looking for a C project I could use to learn how to do FFI bindings for Rust, and I came across your project; with its small and simple API (simple API > complicated API) it would be perfect.

Would you be happy to let me create a Rust wrapper for your library?
