ml-lab / batchwise-dropout
This project is forked from btgraham/batchwise-dropout.
Run fully connected artificial neural networks with dropout applied (mini)batchwise, rather than samplewise. Because every sample in the minibatch shares the same dropout mask, the dropped-out units can simply be skipped: given two hidden layers each subject to 50% dropout, the matrix multiplication between them uses only half of the rows and half of the columns of the weight matrix, so forward- and back-propagation through that layer require 75% less work.
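
A minimal NumPy sketch of the idea, for illustration only; the names, sizes, and dropout probability are assumptions, and this is not the repository's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n1, n2 = 32, 200, 200      # minibatch size and two hidden-layer widths (hypothetical)
p_drop = 0.5                      # dropout probability on both layers

h1 = rng.standard_normal((batch, n1))   # activations of the first hidden layer
W = rng.standard_normal((n1, n2))       # weights between the two hidden layers

# Samplewise dropout: each sample gets its own mask, so the full n1 x n2
# multiplication is still performed and the result is masked afterwards.
mask = rng.random((batch, n2)) > p_drop
h2_samplewise = (h1 @ W) * mask

# Batchwise dropout: one mask per layer is shared by the whole minibatch,
# so the dropped rows and columns of W are never touched. With 50% dropout
# on both layers, only about a quarter of W is used, i.e. ~75% less work.
keep1 = rng.random(n1) > p_drop         # shared mask for layer 1 units
keep2 = rng.random(n2) > p_drop         # shared mask for layer 2 units
h2_batchwise = h1[:, keep1] @ W[keep1][:, keep2]

print(h2_samplewise.shape)   # (32, 200): full-size multiplication, then masked
print(h2_batchwise.shape)    # roughly (32, 100): computed from the kept submatrix only
```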