
License: MIT License


Neural Network Theories

A summary and Python TensorFlow implementation of theories described in Learning Internal Representations by Error Propagation (1985) by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams.


A Brief History

In their book, Perceptrons (1969), Marvin Minsky and Seymour Papert state:

The perceptron has shown itself worthy of study despite (and even because of!) its severe limitations. It has many features that attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgement that the extension is sterile. Perhaps some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting "learning theorem" for the multilayered machine will be found. (pp. 231-232)

This pessimistic view, presented by Minsky and Papert (1969), arguably brought about the so-called AI winter, which lasted throughout the following decade.

In their report, published in 1985, Rumelhart, Hinton, and Williams conclude:

Although our learning results do not guarantee that we can find a solution for all solvable problems, our analyses and results have shown that as a practical matter, the error propagation scheme leads to solutions in virtually every case.

Through persistence in the shadow of doubt, the trio of academics answered Minsky and Papert's challenge and ignited a renewed interest in deep learning. Today, these very models are ubiquitous. From computer processor optimization (e.g., AMD SenseMI) to natural-sounding text-to-speech synthesis (e.g., Google Tacotron, Google Duplex), neural networks are now the face of artificial intelligence. The sketch below illustrates the error propagation scheme on XOR, the canonical problem a single-layer perceptron cannot solve.
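
Below is a minimal NumPy sketch of the error propagation (backpropagation) rule, not the repository's TensorFlow code: a small network of logistic units is trained by gradient descent on squared error to compute XOR. The layer sizes, learning rate, and epoch count are illustrative choices.

```python
# Minimal backpropagation ("error propagation") sketch on XOR.
# Illustrative NumPy code, not taken from this repository.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# One hidden layer of 4 logistic units; sizes are an arbitrary small choice.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)   # hidden activations
    Y = sigmoid(H @ W2 + b2)   # network outputs

    # Backward pass: error signals for squared error E = 0.5 * sum((Y - T)**2),
    # using the logistic derivative y * (1 - y).
    dY = (Y - T) * Y * (1 - Y)        # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)    # error propagated back to the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0, keepdims=True)

print(Y.round(3))  # should approach [[0], [1], [1], [0]]
```

The hidden-layer error signal dH is the heart of the scheme: the output error is propagated backward through the weights W2 and scaled by each unit's local derivative, the recursive rule derived in the report.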


Required Python Libraries

matplotlib
numpy
tensorflow
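
A quick, illustrative way to confirm the dependencies are available (this snippet is not part of the repository):

```python
# Print the version of each required library; any reasonably recent
# release should work for the sketch above.
import matplotlib
import numpy
import tensorflow

for module in (matplotlib, numpy, tensorflow):
    print(module.__name__, module.__version__)
```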
