Training a single-layer perceptron model on sparse data.
- Lab 1 - plain batch gradient descent.
- Lab 2 - Hogwild!, multithreaded stochastic gradient descent without locks.
- Lab 3 - a modified Stochastic Variance-Reduced Gradient (SVRG) algorithm with Hogwild!-style updates.
All labs use the w8a dataset.
I also tried a neuroevolution algorithm for updating the model parameters. It offers little advantage here, since the model isn't deep enough to have a complicated error gradient, but it does run about twice as fast as gradient descent.
However, gradient descent reaches >90% accuracy, whereas this neuroevolution algorithm plateaus around ~60%, even with 10 times more training. Evolution strategies are reportedly better suited to reinforcement learning than to supervised learning. That said, a safe-mutation algorithm might still give interesting results (todo).
Unlike the other files in this repo, Neuroevolution.cpp is not multithreaded; I just wanted to see quickly whether it would work.