
the-deep-learners / deep-learning-illustrated

691 stars · 43 watchers · 347 forks · 15.83 MB

Deep Learning Illustrated (2020)

Home Page: https://www.deeplearningillustrated.com

License: MIT License

Languages: Jupyter Notebook 99.79% · Shell 0.13% · Dockerfile 0.04% · Python 0.04% · Batchfile 0.01%

deep-learning-illustrated's Introduction

Deep Learning Illustrated (2020)

This repository is home to the code that accompanies Jon Krohn, Grant Beyleveld, and Aglaé Bassens' book Deep Learning Illustrated. This visual, interactive guide to artificial neural networks was published by Pearson under its Addison-Wesley imprint.

Installation

Step-by-step guides for running the code in this repository can be found in the installation directory. For installation difficulties, please consider visiting our book's Q&A forum instead of creating an Issue.

Notebooks

All of the code covered in the book can be found in the notebooks directory as Jupyter notebooks.

Below is the book's table of contents with links to all of the individual notebooks.

Note that TensorFlow 2.0 was released after the book went to press. As detailed in Chapter 14 (specifically, Example 14.1), all of our notebooks can be trivially converted into TensorFlow 2.x code if desired. Failing that, TensorFlow 2.x analogs of the notebooks in the current repo are available here.
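For reference, the conversion typically amounts to swapping the standalone keras imports for the tf.keras implementation bundled with TensorFlow 2.x; below is a minimal sketch (the two-layer network shown is illustrative, not any particular notebook's — Example 14.1 in the book is the authoritative version):

# Before (standalone Keras, as used throughout the book):
# from keras.models import Sequential
# from keras.layers import Dense

# After (TensorFlow 2.x's bundled Keras; the model-building code
# itself is unchanged):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])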

Part I: Introducing Deep Learning

Chapter 1: Biological and Machine Vision

  • Biological Vision
  • Machine Vision
    • The Neocognitron
    • LeNet-5
    • The Traditional Machine Learning Approach
    • ImageNet and the ILSVRC
    • AlexNet
  • TensorFlow Playground
  • The Quick, Draw! Game

Chapter 2: Human and Machine Language

  • Deep Learning for Natural Language Processing
    • Deep Learning Networks Learn Representations Automatically
    • A Brief History of Deep Learning for NLP
  • Computational Representations of Language
    • One-Hot Representations of Words
    • Word Vectors
    • Word Vector Arithmetic
    • word2viz
    • Localist Versus Distributed Representations
  • Elements of Natural Human Language
  • Google Duplex

Chapter 3: Machine Art

  • A Boozy All-Nighter
  • Arithmetic on Fake Human Faces
  • Style Transfer: Converting Photos into Monet (and Vice Versa)
  • Make Your Own Sketches Photorealistic
  • Creating Photorealistic Images from Text
  • Image Processing Using Deep Learning

Chapter 4: Game-Playing Machines

  • Deep Learning, AI, and Other Beasts
    • Artificial Intelligence
    • Machine Learning
    • Representation Learning
    • Artificial Neural Networks
  • Three Categories of Machine Learning Problems
    • Supervised Learning
    • Unsupervised Learning
    • Reinforcement Learning
  • Deep Reinforcement Learning
  • Video Games
  • Board Games
    • AlphaGo
    • AlphaGo Zero
    • AlphaZero
  • Manipulation of Objects
  • Popular Reinforcement Learning Environments
    • OpenAI Gym
    • DeepMind Lab
    • Unity ML-Agents
  • Three Categories of AI
    • Artificial Narrow Intelligence
    • Artificial General Intelligence
    • Artificial Super Intelligence

Part II: Essential Theory Illustrated

Chapter 5: The (Code) Cart Ahead of the (Theory) Horse

  • Prerequisites
  • Installation
  • A Shallow Neural Network in Keras (shallow_net_in_keras.ipynb)
    • The MNIST Handwritten Digits (mnist_digit_pixel_by_pixel.ipynb)
    • A Schematic Diagram of the Network
    • Loading the Data
    • Reformatting the Data
    • Designing a Neural Network Architecture
    • Training a Deep Learning Model

Chapter 6: Artificial Neurons Detecting Hot Dogs

  • Biological Neuroanatomy 101
  • The Perceptron
    • The Hot Dog / Not Hot Dog Detector
    • The Most Important Equation in the Book
  • Modern Neurons and Activation Functions
  • Choosing a Neuron

Chapter 7: Artificial Neural Networks

  • The Input Layer
  • Dense Layers
  • A Hot Dog-Detecting Dense Network
    • Forward Propagation through the First Hidden Layer
    • Forward Propagation through Subsequent Layers
  • The Softmax Layer of a Fast Food-Classifying Network (softmax_demo.ipynb)
  • Revisiting our Shallow Neural Network

Chapter 8: Training Deep Networks

Chapter 9: Improving Deep Networks

Part III: Interactive Applications of Deep Learning

Chapter 10: Machine Vision

  • Convolutional Neural Networks
    • The Two-Dimensional Structure of Visual Imagery
    • Computational Complexity
    • Convolutional Layers
    • Multiple Filters
    • A Convolutional Example
    • Convolutional Filter Hyperparameters
    • Stride Length
    • Padding
  • Pooling Layers
  • LeNet-5 in Keras (lenet_in_keras.ipynb)
  • AlexNet (alexnet_in_keras.ipynb) and VGGNet (vggnet_in_keras.ipynb)
  • Residual Networks
    • Vanishing Gradients: The Bête Noire of Deep CNNs
    • Residual Connection
  • Applications of Machine Vision

Chapter 11: Natural Language Processing

Chapter 12: Generative Adversarial Networks

Chapter 13: Deep Reinforcement Learning

  • Essential Theory of Reinforcement Learning
    • The Cart-Pole Game
    • Markov Decision Processes
    • The Optimal Policy
  • Essential Theory of Deep Q-Learning Networks
    • Value Functions
    • Q-Value Functions
    • Estimating an Optimal Q-Value
  • Defining a DQN Agent (cartpole_dqn.ipynb)
    • Initialization Parameters
    • Building the Agent’s Neural Network Model
    • Remembering Gameplay
    • Training via Memory Replay
    • Selecting an Action to Take
    • Saving and Loading Model Parameters
  • Interacting with an OpenAI Gym Environment
  • Hyperparameter Optimization with SLM Lab
  • Agents Beyond DQN
    • Policy Gradients and the REINFORCE Algorithm
    • The Actor-Critic Algorithm

Part IV: You and AI

Chapter 14: Moving Forward with Your Own Deep Learning Projects

  • Ideas for Deep Learning Projects
  • Resources for Further Projects
    • Socially-Beneficial Projects
  • The Modeling Process, including Hyperparameter Tuning
    • Automation of Hyperparameter Search
  • Deep Learning Libraries
  • Software 2.0
  • Approaching Artificial General Intelligence

Book Cover

deep-learning-illustrated's People

Contributors

grantbey · illustrated-series · jonkrohn · tromgy


deep-learning-illustrated's Issues

natural_language_preprocessing.ipynb uses deprecated Gensim attributes when training word2vec

  1. In the Run Word2Vec section of the natural_language_preprocessing.ipynb notebook,
     model = Word2Vec(sentences=clean_sents, size=64, sg=1, window=10, iter=5, min_count=10, workers=4)
     is given; instead it should be
     model = Word2Vec(sentences=clean_sents, vector_size=64, sg=1, window=10, epochs=5, min_count=10, workers=4)

  2. model.wv.vocab should be replaced with model.wv.vectors

  3. model.wv.vocab.keys() should be replaced with model.wv.index_to_key

I believe the authors used Gensim 3.x, whereas the latest Gensim is 4.x.
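Putting those renames together, here is a minimal Gensim 4.x sketch (assuming clean_sents is the list of tokenized sentences built earlier in the notebook):

from gensim.models import Word2Vec

# clean_sents comes from the notebook's preprocessing steps, e.g.:
# clean_sents = [['the', 'quick', 'brown', 'fox'], ...]

# Gensim 4.x renames: size -> vector_size, iter -> epochs
model = Word2Vec(sentences=clean_sents, vector_size=64, sg=1,
                 window=10, epochs=5, min_count=10, workers=4)

# Vocabulary access also changed in 4.x:
n_words = len(model.wv)        # replaces len(model.wv.vocab)
vocab = model.wv.index_to_key  # replaces model.wv.vocab.keys()
vector = model.wv[vocab[0]]    # 64-d vector for the most frequent word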

Keras reports a warning in the "generative_adversarial_network" notebook, and the network fails to train.

When running the "generative_adversarial_network" notebook on the "apple" dataset (downloaded from Google Cloud and renamed to apple.npy to match the path in the notebook), I see this warning:

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py:478: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set `model.trainable` without calling `model.compile` after ?
  'Discrepancy between trainable weights and collected trainable'

and the network fails to train as these two images show:

[images: gan-epoch-99 and gan-epoch-1999]

So, trying to get rid of the warning, I added the following code:

discriminator.compile(loss='binary_crossentropy', 
                      optimizer=RMSprop(lr=0.0008, 
                                        decay=6e-8, 
                                        clipvalue=1.0), 
                      metrics=['accuracy'])

right after

discriminator.trainable = False

(which was in cell 18), but that just made the network learn to produce very sharp noise:

[images: recompiled-epoch-99 and recompiled-gan-epoch-1999]

So it seemed that the generator was learning but the discriminator was not, which is unsurprising, because we never turn the trainable parameter back on again!

I then tried to introduce the toggle inside the loop in the train function, so that it would set discriminator.trainable = True, recompile the discriminator, train it, then set discriminator.trainable = False and recompile the discriminator again. But that process leads to unbounded memory growth, and the kernel crashes around the 150th iteration (I gave Docker 14 GB), so I was never able to see whether there was any change between the 99th and 199th steps.
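For what it's worth, the usual Keras pattern avoids both the warning's root cause and any in-loop recompiling: Keras snapshots the trainable flag at compile() time, so you can compile the discriminator while it is trainable and only then freeze it inside the combined model. A minimal sketch, with illustrative layer sizes rather than the notebook's:

from keras.models import Sequential, Model
from keras.layers import Dense, Input
from keras.optimizers import RMSprop

z_dim, img_dim = 32, 784  # illustrative sizes

# 1. Compile the discriminator while it is trainable; this compiled model
#    is what discriminator.train_on_batch() uses, so its weights update.
discriminator = Sequential([
    Dense(128, activation='relu', input_shape=(img_dim,)),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy',
                      optimizer=RMSprop(lr=0.0008, decay=6e-8, clipvalue=1.0),
                      metrics=['accuracy'])

generator = Sequential([
    Dense(128, activation='relu', input_shape=(z_dim,)),
    Dense(img_dim, activation='tanh'),
])

# 2. Freeze the discriminator *before* compiling the combined model; the
#    frozen state is captured only by gan.compile(), so gan.train_on_batch()
#    updates the generator alone. No toggling or recompiling in the loop.
discriminator.trainable = False
z = Input(shape=(z_dim,))
gan = Model(z, discriminator(generator(z)))
gan.compile(loss='binary_crossentropy',
            optimizer=RMSprop(lr=0.0004, decay=3e-8, clipvalue=1.0))

With this pattern some Keras versions still print the "discrepancy" warning, but it is benign: each compiled model holds the intended snapshot of the flag.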

Slight mismatches between book and code

Enjoying the book.

Found a couple of small issues with using TensorBoard according to the instructions on page 152 of the book.

  1. The --logdir='logs/deep-net' option in step 2 should be --logdir='work/notebooks/logs/deep-net', because the code in the notebook creates the logs directory inside the notebooks directory (see the sketch below).
  2. The docker run command needs -p 6006:6006 added so that a browser on the host machine can access TensorBoard, e.g. docker run -v %cd%:/home/jovyan/work -it --rm -p 8888:8888 -p 6006:6006 dli-stack. These changes could be made in the rundocker.bat and rundocker.sh files.
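For context, the notebook passes a log path that is relative to its own working directory, which is why the host-side --logdir needs the work/notebooks prefix. A minimal, self-contained sketch of the callback wiring (toy model and data, just to show where the callback plugs in; the book's notebook uses its own network and the MNIST data):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

# Toy stand-in model and data:
model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])
model.compile(loss='categorical_crossentropy', optimizer='adam')

X = np.random.random((256, 784))
y = np.eye(10)[np.random.randint(0, 10, 256)]

# 'logs/deep-net' is created relative to the notebook's working directory
# (notebooks/), hence work/notebooks/logs/deep-net as seen from the host.
model.fit(X, y, epochs=1, callbacks=[TensorBoard(log_dir='logs/deep-net')])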

AlexNet

Hi,
I was running Example 10.4. Unfortunately, I got bad results (see below) and I don't know why; probably I did something wrong.
Does somebody have an idea of what could be wrong?
Thanks for the help,
BR
Thomas

import keras
import tensorflow.compat.v1

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization  # note: mixes standalone Keras and tf.keras

import numpy as np

# Fetch a .npz mirror of the Oxford Flowers 17 data directly
# (tflearn no longer works with TensorFlow 2.x; see the issue below):
!rm oxflower17*
!wget https://bit.ly/36QytdH -O oxflower17.npz

data = np.load('oxflower17.npz')
X = data['X']
Y = data['Y']

# The original notebook loads the same data via tflearn:
#import tflearn.datasets.oxflower17 as oxflower17
#X, Y = oxflower17.load_data(one_hot=True)

model = Sequential()

# First conv-pool block
model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

# Second conv-pool block
model.add(Conv2D(256, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

# Third conv-pool block
model.add(Conv2D(256, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

# Dense classification layers
model.add(Flatten())
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))

# 17 flower classes
model.add(Dense(17, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, batch_size=64, epochs=100, verbose=1, validation_split=0.1, shuffle=True)

Train on 1224 samples, validate on 136 samples
2023-05-01 11:29:28.100050: W tensorflow/c/c_api.cc:300] Operation '{name:'training_4/Adam/conv2d_12/bias/v/Assign' id:3526 op device:{requested: '', assigned: ''} def:{{{node training_4/Adam/conv2d_12/bias/v/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](training_4/Adam/conv2d_12/bias/v, training_4/Adam/conv2d_12/bias/v/Initializer/zeros)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
Epoch 1/100
1224/1224 [==============================] - 29s 23ms/sample - loss: 5.0144 - acc: 0.1797 - val_loss: 10.0863 - val_acc: 0.0588
Epoch 2/100
1224/1224 [==============================] - 29s 24ms/sample - loss: 3.2775 - acc: 0.2614 - val_loss: 5.5473 - val_acc: 0.0662
Epoch 3/100
1224/1224 [==============================] - 29s 24ms/sample - loss: 2.6932 - acc: 0.3219 - val_loss: 6.2977 - val_acc: 0.1397

[epochs 4-97 omitted: a similar c_api.cc warning is printed once more; training accuracy climbs steadily toward ~0.99 while validation accuracy oscillates around 0.4-0.7, peaking at 0.6985 in epoch 77]

Epoch 98/100
1224/1224 [==============================] - 29s 24ms/sample - loss: 0.3417 - acc: 0.9289 - val_loss: 6.9736 - val_acc: 0.3676
Epoch 99/100
1224/1224 [==============================] - 29s 24ms/sample - loss: 0.2646 - acc: 0.9338 - val_loss: 6.9945 - val_acc: 0.4853
Epoch 100/100
1224/1224 [==============================] - 30s 24ms/sample - loss: 0.5726 - acc: 0.8881 - val_loss: 5.9798 - val_acc: 0.5147
<keras.callbacks.History at 0x7f1b1ca7b160>

tflearn and tensorflow 2.0

Hi Jon,

First of all, great book, thank you for putting it together!
It's a great resource, easy to read and very well structured!

I noticed an issue with the alexnet_in_keras notebook.
The error I get (on my Mac, but also with your version hosted on Google Colab) is:

"---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 import tflearn.datasets.oxflower17 as oxflower17
2 X, Y = oxflower17.load_data(one_hot=True)

2 frames
/usr/local/lib/python3.6/dist-packages/tflearn/variables.py in ()
5 import tflearn
6
----> 7 from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
8 from tensorflow.python.framework import ops
9 from tensorflow.python.ops import variable_scope

ModuleNotFoundError: No module named 'tensorflow.contrib'"

I believe this is because tflearn won't work with TensorFlow 2.0 (a possible explanation here: tensorflow/tensorflow#30794). This is surely because you wrote the notebook before TensorFlow 2.0 was available.

I am not using the Docker image you provided, and I know that would solve the problem!
However, do you intend to correct this kind of issue / maintain the examples in the future?

Thank you,
Bogdan
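In the meantime, one tflearn-free workaround (the same one used in the AlexNet issue above, and assuming the bit.ly mirror of oxflower17.npz remains available) is to fetch the .npz archive directly and load it with NumPy:

import numpy as np
import urllib.request

# Download a .npz mirror of the Oxford Flowers 17 data, bypassing tflearn:
urllib.request.urlretrieve('https://bit.ly/36QytdH', 'oxflower17.npz')

data = np.load('oxflower17.npz')
X, Y = data['X'], data['Y']
print(X.shape, Y.shape)  # expected: (1360, 224, 224, 3) and (1360, 17)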

Backpropagation in Appendix B

Hi Jon,

I would like to double-check part of formula B.6 in the appendix, which defines the gradient of the activation function w.r.t. z:

[equation image]

Given B.2, I thought that part of B.6 should instead be:

[equation image]

Please check whether this is correct, and do let me know either way.

Thank you in advance for your time and consideration.

Cheers
