
dltemplate's Introduction

Boilerplate for Deep Learning Projects

Model Templates

  1. Multi-layer Perceptron - MNIST (Homemade framework)
  2. CNN from scratch (Homemade framework)
  3. Logistic Regression - MNIST (TensorFlow)
  4. Simple Multi-layer Perceptron - MNIST (TensorFlow)
  5. Enhanced Multi-layer Perceptron using Batch Normalization - MNIST (TensorFlow)
  6. Enhanced Multi-layer Perceptron using TensorFlow Estimator API - MNIST
  7. Simple CNN - MNIST (TensorFlow)
  8. Enhanced CNN - Image Classifier (Keras)
  9. Image classifier (Keras)
  10. Autoencoder - Denoising images, Facial Recognition, Face Generation (Keras)
  11. RNN - Name Generator (Keras)
  12. Part of speech (POS) tagging using an RNN (Keras)
  13. Image Captioning (Keras)
  14. Image Classifier using ResNet and Fast.ai (PyTorch)
  15. Deep Q Network (Keras)
  16. Generative Adversarial Network (GAN) (Keras)
  17. Predicting StackOverflow Tags using Classical NLP
  18. CNN using Sonnet - Signs dataset (DeepMind Sonnet)
  19. Recognize named entities on Twitter using a Bidirectional LSTM (TensorFlow)
  20. Recognize named entities on Twitter using CRF (sklearn-crfsuite)
  21. Recognize named entities on Twitter using Bi-LSTM + CRF (TensorFlow)
  22. Detect Duplicate Questions on StackOverflow using Embeddings
  23. Building a Simple Calculator using a Sequence-to-Sequence Model (TensorFlow)
  24. Reinforcement Learning using crossentropy method
  25. Reinforcement Learning using a neural net (sklearn)
  26. Navigate a Frozen Lake using a Markov Decision Process (MDP)
  27. A Sequence-to-Sequence Chatbot (TensorFlow)
  28. Solve the Taxi Challenge using Q-Learning
  29. Training a Deep Q-Learning Network to play Atari Breakout (Keras)
  30. Playing CartPole using REINFORCE (Keras)
  31. Playing Kung Fu Master using Advantage Actor Critic (AAC) (Keras)
  32. Playing CartPole using Monte Carlo Tree Search
  33. Translating Hebrew to English using RL for Seq2Seq Models (TensorFlow)
  34. Bernoulli Bandits - Survey of Model-free RL Algorithms
  35. Q-Table Learning Agent
  36. Multi-armed Bandit (TensorFlow)
  37. Contextual Bandits (TensorFlow)
  38. Vanilla Policy Gradient Agent (TensorFlow)
  39. Model-based example for RL (TensorFlow)
  40. Deep Q-Network (TensorFlow)
  41. Deep Recurrent Q-Network (TensorFlow)
  42. Asynchronous Actor-Critic Agents (A3C) (TensorFlow)
  43. Wake-word Detection (Keras)
  44. Neural Turing Machine (TensorFlow)
  45. DiscoGAN - Learning to Discover Cross-Domain Relations with Generative Adversarial Networks (PyTorch)
  46. Pointer Generator Network for Text Summarization (TensorFlow)
  47. Minimizing network delay using Deep Deterministic Policy Gradients (DDPG) (Keras)
  48. RL from scratch - Using Policy Gradients to play Pong
  49. Multi-class Text Classification using a CNN and RNN (TensorFlow)
  50. Multi-class Text Classification using fastText (fastText)
  51. Multi-class Text Classification using Fastai (Fastai / PyTorch)
  52. Multi-class Text Classification using Logistic Regression (sklearn)
  53. Multi-class Text Classification using Multinomial Naive Bayes (sklearn)
  54. Multi-class Text Classification using NBSVM (SVM with Naive Bayes Features) (sklearn)
  55. Multi-class Text Classification using BiLSTM (TensorFlow)
  56. Multi-class Text Classification using Word-level CNN (TensorFlow)
  57. Multi-class Text Classification using Word-level CNN initialized with Word2Vec Embeddings (TensorFlow)
  58. Multi-class Text Classification using Character-level CNN (Keras)
  59. Multi-class Text Classification using a Transformer Model (TensorFlow)
  60. Multi-class Text Classification using BERT - Transfer Learning using Deep Bidirectional Transformers (TensorFlow)
  61. Siamese CNN for document matching (Keras)
  62. Text Generation via Adversarial Training (TensorFlow)
  63. Question Detector using Word CNN (TensorFlow)
  64. Decomposable Attention to identify question pairs that have the same intent (Keras)
  65. LightGBM model to identify question pairs (sklearn / LightGBM)
  66. lda2vec to mix the best parts of word2vec and LDA (TensorFlow)
  67. Summarization using LSTM (TensorFlow)

Special Topics

  1. Reinforcement Learning -- Survey of Methods
  2. Natural Language Processing
  3. What do you do with...
  4. Exploring state-of-the-art in text classification

Demonstrates

  1. Basic principles of a neural net framework with methods for forward and backward steps
  2. Basic principles of a convolutional neural network
  3. Basics of TensorFlow
  4. Basic setup for a deep network
  5. More complex network using batch normalization
  6. Training with the TensorFlow Estimator API
  7. Basic principles of a convolutional neural network
  8. CNN using Keras
  9. Fine-tuning InceptionV3 for image classification
  10. Autoencoders
  11. Basic principles of a recurrent neural network for character-level text generation
  12. Using an RNN for POS tagging, building an RNN with the high-level Keras API, and creating a bidirectional RNN
  13. Combining a CNN (encoder) and RNN (decoder) to caption images
  14. A higher level framework (3 lines of code for an image classifier)
  15. Deep Reinforcement Learning using CartPole environment in the OpenAI Gym
  16. Basic principles of a GAN to generate doodle images trained on the 'Quick, Draw!' dataset.
  17. Exploring classical NLP techniques for multi-label classification.
  18. Basic usage of Sonnet to organize a TensorFlow model
  19. Basic principles of a Bidirectional LSTM for named entity recognition
  20. Basic principles of Conditional Random Fields (CRF) and comparison with Bi-LSTM on the same task
  21. Combining a Bi-LSTM with CRF to get learned features + constraints
  22. Use of embeddings at a sentence level, testing StarSpace from Facebook Research.
  23. Solving sequence-to-sequence prediction tasks.
  24. Basic principles of reinforcement learning
  25. Approximating crossentropy with neural nets in an RL model
  26. Using a Markov Decision Process to solve an RL problem.
  27. Building a chatbot using a sequence-to-sequence model approach.
  28. Basic principles of Q-Learning
  29. Tips and tricks to train a Deep Q-Learning Network - Frame Buffer, Experience Replay
  30. Basic principles of using the REINFORCE algorithm
  31. Basic principles of using the Advantage Actor Critic (AAC) algorithm
  32. Introduction to Planning Algorithms using Monte Carlo Tree Search.
  33. Reinforcement learning for sequence-to-sequence models.
  34. Survey of Model-free RL algorithms - Epsilon-greedy, UCB1, and Thompson Sampling.
  35. Introduction to Q-Table Learning.
  36. Building a simple policy-gradient based agent that can solve the multi-armed bandit problem.
  37. Building a simple policy-gradient based agent where the environment has state, but state is not determined by the previous state or action.
  38. Introduction to Policy Gradient methods in RL.
  39. Introduction to model-based RL networks.
  40. Implement a Deep Q-Network using Experience Replay.
  41. Implement a Deep Recurrent Q-Network to handle Partially Observable Markov Decision Processes (POMDPs).
  42. Introduction to Asynchronous Actor-Critic Networks based on DeepMind Paper.
  43. Processing audio using an RNN to detect wake-words.
  44. Introduction to Neural Turing Machines.
  45. Using a GAN to transfer style from one domain to another while preserving key attributes such as orientation and face identity.
  46. Basic principles of Pointer Generator Networks.
  47. Using Reinforcement Learning to optimize a Software Defined Network (SDN).
  48. Introduction to Policy Gradients.
  49. Experiments in finding a best-in-class short-text classifier.
  50. fastText (Facebook Research) performance in text classification tasks.
  51. Using Transfer Learning in NLP to achieve state-of-the-art performance in text classification.
  52. Baseline model for Multi-class Text Classification.

Datasets

  1. MNIST - handwritten digits (Keras)
  2. CIFAR-10 - labelled images with 10 classes
  3. Flowers classification dataset
  4. LFW (Labeled Faces in the Wild) - photographs of faces from the web
  5. Names - list of human names
  6. Captioned Images
  7. Tagged sentences from the NLTK Brown Corpus
  8. Quick, Draw! dataset
  9. StackOverflow posts and corresponding tags
  10. Sign language - numbers 0 - 5
  11. Tweets tagged with named entities
  12. Duplicate questions set, with positive and negative examples, from StackOverflow
  13. Cornell movie dialog corpus.
  14. Open Subtitles movie dialog corpus.
  15. Hebrew to English words.
  16. Pix2pix datasets.
  17. San Francisco Crime Classification (for text/intent classification).
  18. Large Movie Review Dataset (for text/intent classification).

Notation

  • Superscript [l] denotes an object of the l-th layer.
    • Example: a[4] is the activation of the 4th layer; W[5] and b[5] are the 5th-layer parameters.
  • Superscript (i) denotes an object from the i-th example.
    • Example: x(i) is the input of the i-th training example.
  • Subscript i denotes the i-th entry of a vector.
    • Example: a_i[l] denotes the i-th entry of the activations in layer l, assuming this is a fully connected (FC) layer.
  • n_H, n_W and n_C denote, respectively, the height, width and number of channels of a given layer. To reference a specific layer l, write n_H[l], n_W[l], n_C[l].
  • n_H_prev, n_W_prev and n_C_prev denote, respectively, the height, width and number of channels of the previous layer. For a specific layer l, these can also be written n_H[l-1], n_W[l-1], n_C[l-1].
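
In code, this notation maps directly onto array shapes. A minimal NumPy sketch (the shapes and variable names here are illustrative, not taken from the templates):

    import numpy as np

    # a[l] for a conv layer is stored as an array of shape (m, n_H, n_W, n_C),
    # where m is the number of examples x(i).
    m = 4                                       # number of examples
    n_H_prev, n_W_prev, n_C_prev = 28, 28, 1    # previous layer, e.g. a 28x28 grayscale input
    f, n_C = 3, 8                               # filter size and number of channels in layer l

    a_prev = np.random.randn(m, n_H_prev, n_W_prev, n_C_prev)   # a[l-1]
    W = np.random.randn(f, f, n_C_prev, n_C)                    # W[l]
    b = np.zeros((1, 1, 1, n_C))                                # b[l]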

Naming conventions

Hyperparameters

  • n_epochs
  • learning_rate, lr
  • epsilon

Parameters

  • features, inp, x, x_train, x_val, x_test
  • labels, y, y_train, y_val, y_test
  • weights, w, w1, w2, w3
  • bias, b, b1, b2, b3
  • z, z1, z2, z3
  • a, a1, a2, a3
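
Put together, a minimal training function using these hyperparameter and parameter names might look like the following (an illustrative NumPy sketch only, not code from the templates):

    import numpy as np

    def train(x_train, y_train, n_epochs=10, learning_rate=0.1):
        """Logistic-regression loop illustrating the naming conventions above."""
        w = np.zeros(x_train.shape[1])      # weights
        b = 0.0                             # bias
        for _ in range(n_epochs):
            z = x_train @ w + b             # pre-activation
            a = 1.0 / (1.0 + np.exp(-z))    # activation (sigmoid)
            dw = x_train.T @ (a - y_train) / len(y_train)
            db = np.mean(a - y_train)
            w -= learning_rate * dw
            b -= learning_rate * db
        return w, b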

Common tests

  1. Check gradients against a calculated finite-difference approximation (see the sketch after this list)
  2. Check shapes
  3. Logits range. If your model's output is constrained to a specific range rather than being unbounded, you can test that the range stays consistent. For example, if the logits pass through a tanh, all of the values should fall between -1 and 1.
  4. Input dependencies. Make sure all of the variables in feed_dict affect the train_op.
  5. Variable change. Check that the variables you expect to train actually change with each training op.
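
Test 1, for example, can be implemented with a centered finite-difference check (a minimal NumPy sketch; the helper name finite_diff_grad is illustrative):

    import numpy as np

    def finite_diff_grad(f, x, eps=1e-6):
        """Centered finite-difference approximation of df/dx for a scalar-valued f."""
        grad = np.zeros_like(x)
        for i in range(x.size):
            x_plus, x_minus = x.copy(), x.copy()
            x_plus.flat[i] += eps
            x_minus.flat[i] -= eps
            grad.flat[i] = (f(x_plus) - f(x_minus)) / (2 * eps)
        return grad

    # Example: the analytic gradient of f(x) = sum(x**2) is 2*x.
    x = np.random.randn(3, 4)
    numeric = finite_diff_grad(lambda v: np.sum(v ** 2), x)
    assert np.allclose(numeric, 2 * x, atol=1e-4)

Tests 4 and 5 follow the same before/after pattern: run a single training step and assert that the expected variables (and only those) changed.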

Good practices for tests:

  1. Keep them deterministic. If you really want randomized input, make sure to seed the random number generator so you can rerun the test easily.
  2. Keep the tests short. Don’t have a unit test that trains to convergence and checks against a validation set. You are wasting your own time if you do this.
  3. Make sure you reset the graph between each test.
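
A minimal pytest-style skeleton that follows these practices, assuming the TensorFlow 1.x API used by the templates above (the tiny linear model inside the test is a stand-in, not one of the templates):

    import numpy as np
    import tensorflow as tf

    def test_variables_change_after_one_step():
        tf.reset_default_graph()                # practice 3: fresh graph per test
        tf.set_random_seed(42)                  # practice 1: deterministic
        np.random.seed(42)

        x = tf.placeholder(tf.float32, shape=(None, 4))
        y = tf.placeholder(tf.float32, shape=(None, 1))
        w = tf.get_variable('w', shape=(4, 1))
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            before = sess.run(w)
            sess.run(train_op, feed_dict={x: np.random.randn(8, 4),
                                          y: np.random.randn(8, 1)})
            after = sess.run(w)
            # practice 2: a single step is enough to see the variables change
            assert not np.allclose(before, after)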

Useful references

  1. How to test gradient implementations

Ideas

  • Turn trainers into generators, one epoch at a time
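
One way to read this idea (a hypothetical sketch; fit_one_epoch is an assumed model method, not an existing API in this repo) is a trainer that yields control back to the caller after every epoch, so the caller decides about evaluation, checkpointing and early stopping:

    def train(model, data, n_epochs=10, learning_rate=0.01):
        """Generator-style trainer: yields metrics one epoch at a time."""
        for epoch in range(n_epochs):
            loss = model.fit_one_epoch(data, lr=learning_rate)   # assumed model method
            yield epoch, loss

    # Usage: the caller controls what happens between epochs, e.g.
    # for epoch, loss in train(model, data):
    #     if loss < 0.1:
    #         break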

dltemplate's People

Contributors

markmo

Forkers

andrewlook
