
toy-neural-network-js's Introduction

Toy-Neural-Network-JS


Neural Network JavaScript library for Coding Train tutorials

Examples / Demos

Here are some demos running directly in the browser:

To-Do List

  • Redo gradient descent video about delta weight formulas; connect to the "mathematics of gradient" video
  • Implement gradient descent in library / with code
  • XOR coding challenge live example
  • MNIST coding challenge live example
    • redo this challenge
    • cover softmax activation, cross-entropy
    • graph cost function?
    • only use testing data
  • Support for saving / restoring network (see #50)
  • Support for different activation functions (see #45, #62)
  • Support for multiple hidden layers (see #61)
  • Support for neuro-evolution
    • play flappy bird (many players at once).
    • play pong (many game simulations at once)
    • steering sensors (a la Jabril's forrest project!)
  • Combine with ml5 / deeplearnjs

Getting Started

If you're looking for the original source code to match the videos, visit this repo

Prerequisites

You need to have the following installed:

  1. Node.js
  2. npm

Then install the project's dependencies with:

npm install

Installing

This project doesn't require any additional installation steps

Documentation

  • NeuralNetwork - The neural network class
    • predict(input_array) - Returns the network's output for the given inputs
    • train(input_array, target_array) - Performs one training step with the given inputs and expected outputs
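
To make these two methods concrete, here is a hedged, self-contained sketch of what predict(input_array) does internally: one feedforward pass through a single hidden layer with a sigmoid activation. The real library wraps this in a Matrix class; plain arrays and made-up weight values are used here for brevity.

```javascript
// Sigmoid squashes any number into (0, 1)
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// One layer: weighted sum of inputs plus bias, then the activation
function layer(weights, bias, inputs) {
  return weights.map((row, i) =>
    sigmoid(row.reduce((sum, w, j) => sum + w * inputs[j], bias[i]))
  );
}

// predict: input -> hidden -> output
function predict(net, input_array) {
  const hidden = layer(net.weights_ih, net.bias_h, input_array);
  return layer(net.weights_ho, net.bias_o, hidden);
}

// A hypothetical 2-2-1 network (weights are illustrative, not trained)
const net = {
  weights_ih: [[0.5, -0.5], [0.3, 0.8]],
  bias_h: [0.1, -0.1],
  weights_ho: [[1.0, -1.0]],
  bias_o: [0.2],
};
const out = predict(net, [1, 0]); // one value, somewhere in (0, 1)
```

train() runs this same forward pass, compares the result against target_array, and backpropagates the error to adjust the weights.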

Running the tests

Tests run automatically on CircleCI; you can also run npm test locally once you have completed the Prerequisites step.

Built With

  • Node.js - The JavaScript runtime used
  • CircleCI - Automated testing service
  • Jest - Testing framework used

Contributing

Pull requests are welcome. Each one must first pass the automated tests; it will then be reviewed and either accepted or declined.

Libraries built by the community

Here are some community-built libraries with the same or similar functionality to this one:

Feel free to add your own libraries.

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the terms of the MIT license, see LICENSE.

toy-neural-network-js's People

Contributors

adityamhatre, alcadesign, anirudhgiri, arisanguinetti, enginefeeder101, gypsydangerous, jackroi, jonasfovea, jonathan-richer, kim-marcel, maik1999, mdatsev, meiamsome, michezio, mikaelsouza, mrdcvlsc, mtrnord, narchontis, notshekhar, papalotis, philaturner, rhbvkleef, savvysiddharth, schrummy14, shiffman, simon-tiger, thekayani, therealyubraj, thomas-smyth, versatilus


toy-neural-network-js's Issues

Crossover function for neuroevolution

I forked this repository and added a crossover function to nn.js and matrix.js:
https://github.com/jnsjknn/Toy-Neural-Network-JS

It works either by combining half of each neural network's weight and bias matrices:

let child = NeuralNetwork.crossover(parentA, parentB)

or optionally by taking the new weights and biases randomly from one of the neural networks:

let child = NeuralNetwork.crossover(parentA, parentB, 'random')

I am quite new to GitHub. Can I just make a pull request?
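
For anyone curious, the two modes described above could be sketched like this on flat weight arrays (illustrative code, not the fork's actual Matrix-based implementation):

```javascript
// Crossover of two parents' weights, in the two modes described above.
function crossover(a, b, mode) {
  if (mode === 'random') {
    // take each weight randomly from one of the two parents
    return a.map((w, i) => (Math.random() < 0.5 ? w : b[i]));
  }
  // default: first half from parent A, second half from parent B
  const mid = Math.floor(a.length / 2);
  return a.map((w, i) => (i < mid ? w : b[i]));
}

const parentA = [1, 1, 1, 1];
const parentB = [2, 2, 2, 2];
crossover(parentA, parentB);           // → [1, 1, 2, 2]
crossover(parentA, parentB, 'random'); // each weight is a 1 or a 2
```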

READMEs

We need a README for each example, links to all the examples, and better documentation in the main README.

Multilayer Perceptron

Currently the neural network in the library has only one hidden layer.
The model needs to be updated to support multiple hidden layers, like other libraries such as synaptic and brain.js.

Implementation with cpp

Hi, I watched your playlist; it is awesome.
I made an implementation of the matrix and nn classes in C++.
I'm still working on file reading/writing and multiple layers, but everything else works.

Here is the link to the repository NeuralNetwork-CPP

Adjust at the end

Hi,

I don't know if this has already been reported, but it is still not fixed in the code.
When training the model, you calculate the weight deltas between the hidden layer and the output layer.
But you update those weights too early: you then use the new weights to calculate the deltas for the previous weights during backpropagation.
You have to keep the deltas in memory and apply them at the end, after everything else.

That's why your model takes so long to train on the XOR problem. It shouldn't take that long. From memory, it took 50,000 iterations, which is far too many for a problem like this.
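
The ordering bug described above can be demonstrated with a toy two-weight chain (illustrative code, not the library's):

```javascript
// Toy illustration of the ordering bug: with a chain y = w2 * (w1 * x),
// the gradient for w1 depends on w2. Updating w2 before computing w1's
// delta (wrong) gives a different result than computing both deltas
// first and applying them at the end (right).
function stepWrong(w1, w2, x, target, lr) {
  const error = target - w2 * (w1 * x);
  w2 += lr * error * (w1 * x);  // update w2 immediately...
  w1 += lr * error * w2 * x;    // ...so w1's delta uses the NEW w2
  return [w1, w2];
}

function stepRight(w1, w2, x, target, lr) {
  const error = target - w2 * (w1 * x);
  const d2 = lr * error * (w1 * x); // compute both deltas first,
  const d1 = lr * error * w2 * x;   // using the old w2,
  return [w1 + d1, w2 + d2];        // then apply them together
}
```

With w1 = w2 = 0.5, x = 1, target = 1, and lr = 0.1, the two orderings already produce different values of w1 after a single step.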

Keep going ^^.

P.S.: Sorry if my English is not perfect; it is not my native language.

Custom test without node using pure javascript

Hey, I just tried to create my own test function, similar to the Jest library.
I have run it on the 5 statements below and have yet to find any failure.
For now it only has two functional matchers: expect(value).toBe(other) and expect(value).toEqual(something).

Let me know what you guys think about this kind of thing. Is it worth it?

Below is the implementation of the tests

function check(current, other) {
	if (typeof other !== 'object' && typeof current !== 'object') return current === other;
	let equal = true;
	for (let prop in other) {
		if (current[prop] === undefined) throw new Error("FAILED");
		// every property must match, so combine with &&, not ||
		equal = equal && check(current[prop], other[prop]);
	}
	if (!equal) throw new Error("FAILED");
	return equal;
}


function test(comment, callback) {
	let error;
	try {
		// execute the test
		callback();
	} catch (e) {
		// record whether it passed or failed
		error = e.message;
	}
	console.log(comment, error === "FAILED" ? "FAILED" : "PASSED");
}

function expect(value) {
	// return a fresh matcher object instead of relying on `this`,
	// which would be the global object when called without `new`
	return {
		toBe(other) {
			if (value !== other) throw new Error("FAILED");
		},
		toEqual(something) {
			check(value, something);
		}
	};
}

// The tests start HERE

test('Is 2+2 be 4?',() => {
	expect(2 + 2).toBe(4);						// returns : Is 2+2 be 4? PASSED
});

test('Is 2+3 be 4?',() => {
	expect(2 + 3).toBe(4);						// returns : Is 2+3 be 4? FAILED
});

test('Is {value:3*3} equal {value:9}?',() => {
	expect({value:3*3}).toEqual({value:9});        			// returns : Is {value:3*3} equal {value:9}? PASSED
});

test('Is {value:2*3} equal {value:9}?',() => {
	expect({value:2*3}).toEqual({value:9});       			// returns : Is {value:2*3} equal {value:9}? FAILED
});

test('Is {value:{arr:[6]}} equal {value:9}?',() => {
	expect({value:{arr:[6]}}).toEqual({value:9});    		// Is {value:{arr:[6]}} equal {value:9}? FAILED
});

Uncaught SyntaxError: missing ) after argument list mnist.js line 9

I found the mnist example to be broken with the following errors:

Uncaught SyntaxError: missing ) after argument list mnist.js:9
...
Uncaught ReferenceError: loadMNIST is not defined sketch.js:33

Does anyone know what is going on? I don't know enough about promises to debug it.

BTW, doodle_classifier worked for me as expected.

Unexpected behavior on adding multiple hidden layers

I've modified the NeuralNetwork code so the class constructor receives the number of hidden layers and the number of nodes per hidden layer, also modifying the guess and training functions to work with multiple hidden layers. It seems to work, but in the XOR example, as I create NNs with an increasing number of hidden layers, the results come back wrong more and more frequently, to the point that at 5 hidden layers I always get wrong answers. Is this something I should expect, or is there something wrong with my implementation? My code: https://pastebin.com/iF8UVhSU
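
One plausible explanation (not a diagnosis of the linked code) is the vanishing gradient problem: each sigmoid layer scales the backpropagated gradient by the sigmoid derivative, which is at most 0.25, so with 5 hidden layers very little gradient survives to train the early layers:

```javascript
// Sigmoid derivative expressed in terms of the layer's output y
const dsigmoid = (y) => y * (1 - y);

// Best-case gradient magnitude surviving n sigmoid layers (y = 0.5,
// where the derivative peaks at 0.25)
function survivingGradient(layers) {
  let g = 1;
  for (let i = 0; i < layers; i++) g *= dsigmoid(0.5); // ×0.25 per layer
  return g;
}

survivingGradient(1); // 0.25
survivingGradient(5); // 0.25^5 ≈ 0.00098 — almost nothing left to learn from
```

This is one reason deeper networks typically use ReLU activations or careful weight initialization.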

Training does not reliably converge, even with the simple XOR problem

I am not sure that this is a known issue, but I cannot get the neural network to converge reliably.

In many instances, it just cannot solve the simple XOR problem.

For instance, I serialized one of the randomly generated NeuralNetworks that displays this problem:

Is this just an issue with the activation function? Then it would be helpful to explain this in the README.
Or is there a bug in the code?

import {NeuralNetwork} from "./nn/nn";

let nn;

const training_data = [
    {
        inputs: [0, 0],
        outputs: [0],
    },
    {
        inputs: [0, 1],
        outputs: [1],
    },
    {
        inputs: [1, 0],
        outputs: [1],
    },
    {
        inputs: [1, 1],
        outputs: [0],
    },
]

function setup() {
    createCanvas(400, 400);

    nn = NeuralNetwork.deserialize({"input_nodes":2,"hidden_nodes":2,"output_nodes":1,"weights_ih":{"rows":2,"cols":2,"data":[[-0.12692590266986858,-0.844955757436316],[-0.9357427469178123,0.8173651578783794]]},"weights_ho":{"rows":1,"cols":2,"data":[[-0.5832662974097391,0.5308947844782579]]},"bias_h":{"rows":2,"cols":1,"data":[[0.39650732687505963],[-0.49808473788143637]]},"bias_o":{"rows":1,"cols":1,"data":[[0.2908941132572971]]},"averageError":0,"learning_rate":0.01,"activation_function":{}});
    global.nn = nn;
}

function draw() {
    background(0);

    for (let i = 0; i < 1000; i++) {
        let data = random(training_data);
        nn.train(data.inputs, data.outputs);
    }

}

global.setup = setup;
global.draw = draw;
global.NeuralNetwork = NeuralNetwork;

The XOR example at https://codingtrain.github.io/Toy-Neural-Network-JS/examples/xor/ also faces this issue; refreshing a few times will eventually show the wrong separation visualization.

Dropout Feature

Hello everyone! I really like this library, and the Coding Train videos really opened my mind to all the possibilities of ANNs.

I was reading about ways to avoid overfitting, and dropout comes up a lot. Has anyone successfully implemented dropout in this library? If not, could you please provide some clues on where to start?
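
Dropout is not part of this library, but as a starting point, here is a hedged sketch of inverted dropout applied to a layer's activations: during training, zero each activation with probability p and scale the survivors by 1/(1-p); at prediction time, do nothing.

```javascript
// Inverted dropout on an array of hidden-layer activations.
// p is the probability of dropping a unit (training time only).
function dropout(activations, p) {
  return activations.map((a) =>
    Math.random() < p ? 0 : a / (1 - p)
  );
}

// Example: drop roughly half of the hidden activations during a train step
dropout([0.9, 0.1, 0.6, 0.4], 0.5);
```

In this library's structure, a natural place to apply it would be to the hidden-layer Matrix inside train(), before computing the outputs.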

Thank you very much :)

Numbers recognition

Hi!
I've tried to implement the same neural network library as in the videos, but I have some problems.

I am coding in Python, in Processing. Just to warn you, I am not really familiar with Python or Processing, which is why everything is in one file (I wasn't able to separate it into different files and import them; if you know how to do this, please tell me).

The problem I am getting is that the neural network gets stuck. I rarely go over 85% accuracy, which is quite bad, I think. No matter how long I let it run, around 85% the accuracy just fluctuates randomly. I know my code is messy and not very clear, but could someone try to find out what's going on? Just so you know, I've stored all the images in 60,000 files (one per image), each containing a number between 0 and 1 for each pixel's color.

I've also added the possibility of having several hidden layers in the neural network.
However, I am not sure I am doing the backpropagation and gradient descent correctly; could anyone check and tell me?

Thanks a lot. I am new to GitHub, so sorry if I did anything wrong
(and by the way, I am French, which is why my English might be bad).

Java Neural Network Library

Hi, I've built my own version of this neural network library in Java. It has the same functionality as this one, but I've added some extra features, like saving/reading the neural network to/from a JSON file and support for multiple hidden layers. The library can be downloaded as a jar and used in Java or Processing projects.

These are two examples that use my library:

Besides me, other people are developing their own neural network libraries in different languages (see #91). I think it would be useful to collect and reference them, e.g. in the README. This would make it easier for people to find a suitable library to follow along with @shiffman's tutorials/coding challenges in their preferred language.

Doodle classification visualization

Hello,

I've created a simple visualization for the doodle classification example. It shows every doodle with the corresponding prediction from the NN.

Sample screenshot:

doodle

white background = NN was correct
red background = guessed "cat"
green background = guessed "rainbow"
blue background = guessed "train"

In the first table in the console, each row is the number of mistakes for each label:classification pair.

The second table is a raw list of wrong guesses in the format: (index), doodle number, label, classification, guess "cat", guess "rainbow", guess "train".

Overall, it looks like the NN is pretty consistent with rainbow doodles, but not so much with cats and trains.

Cannot read property 'top' of null

When I was following along with the video, I got the following error:

Uncaught TypeError: Cannot read property 'top' of null Bird.js:44

It's the same error as in the video, but even after I applied the same fix he did, I still get it. Does anybody know how to fix this issue?

This is the code:

let closest = null;
let closestD = Infinity;
for (let i = 0; i < pipes.length; i++) {
  let d = pipes[i].x - this.x;
  if(d < closestD && d > 0) {
    closest = pipes[i];
    closestD = d;
  }
}


let inputs = [];
inputs[0] = this.y / height;
inputs[1] = closest.top / height;
inputs[2] = closest.bottom / height;
inputs[3] = closest.x / width;
let output = this.brain.predict(inputs);
if(output[0] > 0.5) {
  this.up();
}

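A likely cause, sketched below with the names from the snippet above: when no pipe is ahead of the bird (for example, on the very first frame), the loop leaves closest as null, so closest.top throws. Guarding before use avoids the crash.

```javascript
// Find the nearest pipe ahead of x, or null if there is none —
// the same search as in the snippet above, extracted as a function.
function findClosestPipe(pipes, x) {
  let closest = null;
  let closestD = Infinity;
  for (const pipe of pipes) {
    const d = pipe.x - x;
    if (d < closestD && d > 0) {
      closest = pipe;
      closestD = d;
    }
  }
  return closest;
}

const ahead = findClosestPipe([{ x: 10 }, { x: 50 }], 20); // → { x: 50 }
const none = findClosestPipe([{ x: 10 }], 20);             // → null

// Guard before building the inputs array:
// if (closest !== null) { ... this.brain.predict(inputs) ... }
```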

Python Neural Network Library

Hello, everyone. Some time ago, when shiffman announced that he would start developing a library for creating neural networks, I decided to start on my own.
In my case I used Python. The initial idea of my project is to create neural networks from layers with specific characteristics, so different types of layers can be used depending on the problem. At the moment I have implemented only layers of simple neurons, and I would like to share my progress.

Here is the link to my repository.
https://github.com/Gabriel-Teston/Machine-Learning

Gradient Optimization

Since the derivative of the cost function is actually 2 * error, training speed can be increased by doubling the learningRate or multiplying the gradients by two.
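
This can be checked numerically: for the MSE cost C(y) = (target - y)^2, the exact derivative is dC/dy = -2 * (target - y), i.e. twice the signed error, so dropping the factor of 2 effectively just halves the learning rate.

```javascript
// Squared-error cost for a single output
const cost = (target, y) => (target - y) ** 2;

// Central-difference approximation of dC/dy
function numericalDerivative(target, y, h = 1e-6) {
  return (cost(target, y + h) - cost(target, y - h)) / (2 * h);
}

numericalDerivative(1, 0.25); // ≈ -1.5, which equals -2 * (1 - 0.25)
```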

Building process for more structure

The neural network files should be better organized.
The community contributes really great features, but in my opinion it gets a bit messy when everything is inserted into basically one file.

A good example is #78.

I dont think mutate is working

Is it really mutating?
I'm making a copy of the bird, and the new bird has the same weights. The copy is supposed to get mutated as well, but I don't think it does. But then how do the birds improve?
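
For reference, a mutate step typically nudges a fraction of the weights with small random noise, returning new values rather than touching shared ones. A hedged sketch (not the library's actual implementation) of the expected behavior on a flat weight array:

```javascript
// Nudge each weight with probability `rate` by uniform noise in (-0.1, 0.1).
// map() returns a NEW array, so the original weights stay untouched — if a
// copy() only shallow-copies Matrix objects, mutation can appear broken.
function mutateWeights(weights, rate) {
  return weights.map((w) =>
    Math.random() < rate ? w + (Math.random() * 2 - 1) * 0.1 : w
  );
}

const original = [0.5, -0.5, 0.25];
const mutated = mutateWeights(original, 1); // rate 1: every weight nudged
// `original` is unchanged; `mutated` differs slightly from it
```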

Bias variable names are a little confusing

The bias nodes exist only in pre-output layers, but, for example, the weight connected to the hidden bias neuron is called bias_o. This confused me a lot, so I think it would be easier to understand bias_o as bias_h and bias_h as bias_i. This way it's easier to read the code that updates the biases, since the gradient from the output layer, for example, is mapped to the bias in the previous layer, as the name backpropagation might suggest.

Mutate function not working

When I call the mutate function, I get an undefined error. When I log the neural network, I get this object, stringified:

{"input_nodes":6,"hidden_nodes":16,"output_nodes":2,"weights_ih":{"rows":16,"cols":6,"data":[[-0.7588681817561556,-0.0015736164027826405,-0.7743441465419845,0.5735821968139647,0.7303167179333263,0.7536888325147988],[-0.7517714708431349,0.7002838641328841,0.5925134069842137,0.43114275027161675,0.9624881626211272,-0.09895806160282827],[-0.6589455178098218,-0.696096110441593,-0.3055149031960056,-0.024785170248342148,-0.8490996275624898,-0.5484065157264433],[-0.6310071892596323,0.26198257989207185,0.8362451249845932,0.4293335198226558,-0.7134868294454368,0.28861286889872595],[0.41637566287502237,0.17554725166346286,0.9776465636395706,-0.11094571247042762,-0.4410035141828552,0.8003457306481243],[0.0559450013271392,-0.20289033386389832,-0.9639277145577907,0.401103247877566,-0.9286951288454572,0.7427159382756345],[0.9070155968745262,-0.9364669144680655,-0.7560906229307247,0.48867174219359066,-0.42845968213807284,0.708650914339163],[0.545914487016474,-0.3452433257623624,0.617640428365037,-0.9789470504022963,-0.385002645862488,-0.1238948488074465],[-0.33333250302978534,0.8011260001261697,-0.8894792831773897,-0.304032759451089,-0.2942273171523766,-0.2575556333171858],[0.07493841291953318,-0.35217194950133957,-0.6666732072364074,0.833799198044272,0.4792160923436475,0.057940675531410246],[-0.43416678440635037,-0.11750618656444756,0.3138631606707061,0.8938704100400638,0.9597614528283631,0.6278532074456256],[-0.832164703335073,0.8992951958382109,-0.13309989725198035,0.4117830207975044,0.8118711985575024,-0.9930072620702992],[-0.36974983544839324,0.6487337398799555,-0.30299945691158037,-0.6500380850651708,-0.3129680183779202,-0.048327989224226986],[0.8070929383947547,0.4317168631718773,0.9118896155563778,-0.24689749188593524,-0.9308839764160801,-0.9198811980884236],[0.6178231884479319,-0.9647702553637196,-0.9510743115652738,0.8400054405852297,-0.5766064338654808,-0.5299711268882832],[-0.08816621615077569,-0.2902987264159558,-0.9777235663990531,-0.3317243272380863,-0.09677818226102
897,0.6356317230362021]]},"weights_ho":{"rows":2,"cols":16,"data":[[0.6727526542996372,-0.641583001128716,-0.1505842332276397,0.451071824712562,0.9631220474516455,-0.20181939062522458,-0.32326463850678966,-0.44864268006435015,-0.2665950261562857,-0.7198283800126162,0.4588827650017673,0.6073685162616083,0.6187739054701211,-0.6357582982407926,0.4155009853258034,-0.2230334822016542],[-0.4245468002837849,0.49214186452740627,0.1138953958285831,-0.7946429903817247,0.3175606435671261,-0.21391910904903266,0.23745509594259806,-0.4054028550189237,-0.7598945030430335,0.3555503685605115,-0.10151091555982772,0.19047065201160684,-0.958867043242063,-0.47887629905366147,-0.9891715537045958,-0.5602175019475935]]},"bias_h":{"rows":16,"cols":1,"data":[[0.9488474625857628],[0.134590428626157],[0.7481822561245233],[-0.9782939349056812],[-0.2117200516104556],[0.33784094797994646],[0.7674850477109887],[-0.14641101265565082],[-0.7294726029199898],[-0.3105109740249459],[-0.27102778258669913],[-0.44505190484525015],[0.623598487846885],[-0.8533350107134341],[0.871087887264808],[0.12086478580130455]]},"bias_o":{"rows":2,"cols":1,"data":[[0.42492795508612424],[0.6108978126849949]]},"learning_rate":0.1,"activation_function":{}}

example for console

I am sorry for my really noob request, but is it possible to get a purely console-based example for the mnist example?
I want to run something like "node testmnist.js". I know this is basic, but I do not understand how all the libraries have to be loaded and called from a central program in order to test the inputs.

Thanks for your help.

More tests!

The NeuralNetwork class needs more tests:

  • test the feedforward pass
  • test backpropagation

Neural Network Feature (Wish) List


These are the basics:

  • Basic 2-layer network with bias
  • Activation functions which reuse the activation on the backwards pass (sigmoid, tanh, ReLU)
  • MSE cost function
  • Adjustable learning rate
  • Additional examples and tests
    See #41 and #66 and #76.

These would be interesting to add:

  • Multiple hidden layers
    Per #61.
  • Semi-arbitrary activation functions
    Per #70 and #75.
  • Arbitrary cost functions
    See here.
  • Automatically adapting learning rate (Momentum)
    Per #65. Also see here via here.
  • Multiple initial weighting strategies
    See here.
  • Convolution layers
    See here. Also see here via here.
  • Simple RNN
    See here via here.
  • Advanced optimization
    See here.
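
The "activation functions which reuse the activation on the backwards pass" item can be sketched like this (an illustrative design, not the library's final API): each activation is stored as a (func, dfunc) pair, where dfunc is expressed in terms of the forward output y rather than the input x, so the backward pass can reuse the value already computed.

```javascript
// Pair a forward activation with its derivative expressed via the output y
class ActivationFunction {
  constructor(func, dfunc) {
    this.func = func;   // forward: x -> y
    this.dfunc = dfunc; // backward: y -> dy/dx, reusing the stored output
  }
}

const sigmoid = new ActivationFunction(
  (x) => 1 / (1 + Math.exp(-x)),
  (y) => y * (1 - y)          // sigmoid'(x) = y(1 - y)
);
const tanh = new ActivationFunction(
  (x) => Math.tanh(x),
  (y) => 1 - y * y            // tanh'(x) = 1 - y^2
);
const relu = new ActivationFunction(
  (x) => Math.max(0, x),
  (y) => (y > 0 ? 1 : 0)      // subgradient; 0 at the kink
);
```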

C# version/port of nn.js & matrix.js

Hello, I'm just a small fan from the Czech Republic, and I made a C# version of your library, Dan: https://github.com/ItsMates/Toy-Neural-Network-CSharp. By the way, I have never used GitHub before, so I hope it's okay to post this message in the Issues. I am still learning everything (mainly English), so please tell me if I shouldn't write here.

I love the work you do, even though I have never written a JS or p5 script :D Now I am working on an evolution project in C# based on your tutorials, and I thought somebody might find my ugly code useful :)

Have a fantastic day!
