aamini / introtodeeplearning
Lab Materials for MIT 6.S191: Introduction to Deep Learning
License: MIT License
Hi there!
I use Docker for Mac. I ran the container as follows:
docker run -p 8888:8888 -p 6006:6006 -v introtodeeplearning_labs:/notebooks/introtodeeplearning_labs mit6s191/iap2018:labs
But when I opened the browser, all I saw was an empty introtodeeplearning_labs folder; I couldn't find the lab1 or lab2 folders. What happened?
Thanks for any response!
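One likely cause (my reading of the -v flag above, not a confirmed diagnosis): a bare name like introtodeeplearning_labs as the mount source is treated by Docker as a named volume, which is created empty, rather than as a bind mount of the cloned repo. A sketch, assuming the repo has been cloned into $HOME:

```shell
# Confirm that Docker created an (empty) named volume from the bare -v source:
docker volume ls | grep introtodeeplearning_labs

# Bind-mount the cloned repository by absolute path instead:
docker run -p 8888:8888 -p 6006:6006 \
  -v "$HOME/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs" \
  mit6s191/iap2018:labs
```

Docker only performs a bind mount when the source is an absolute path; anything else falls back to volume semantics.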
"save_video_of_model(pong_model, "Pong-v0", filename='pong_agent.mp4')" will fail because save_video_of_model does not call pre_process on the observations: the model expects a pre-processed 80x80 tensor, not the raw observation that comes from Pong-v0.
As a hack, I changed (just for the Pong model)
action = model(tf.convert_to_tensor(obs.reshape((1,-1)), tf.float32)).numpy().argmax()
to
action = model(tf.convert_to_tensor(pre_process(obs).reshape((1,-1)), tf.float32)).numpy().argmax()
The Lab 3 file is not loading.
The error message displayed is:
"Sorry, something went wrong. Reload?"
I have checked my network connection and it seems fine.
Why do we import Model from tensorflow.keras (from tensorflow.keras import Model)?
It isn't clear what its significance is.
@aamini @dwanderton @aravic @apsoleimany
What a Keras Dense layer does is this: output = activation(dot(input, kernel) + bias).
(1) Note that the operation between the kernel (weights) and the input is described as a dot product, NOT a matrix multiplication.
In Lab 1, section 1.3, the graph and the code's TODO comment say to use tf.matmul(), which is matrix multiplication; instead we should use the dot-product function, as described in the Dense layer doc linked below (1).
(1) https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable
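For what it's worth, for 2-D tensors the two operations coincide numerically, so tf.matmul is a faithful way to implement the Dense computation; a small numpy check (illustrative only, not from the lab):

```python
import numpy as np

x = np.array([[1.0, 2.0]])           # 1 x 2 input (batch of one)
W = np.array([[0.5, -1.0, 2.0],
              [1.0,  0.0, 0.5]])     # 2 x 3 kernel

# For 2-D arrays, dot(input, kernel) and a matrix multiplication compute
# exactly the same thing:
via_dot = np.dot(x, W)
via_matmul = x @ W
```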
When I run
!pip install mitdeeplearning
and then, in the Lab 2 debiasing solution (lab2 DebiasingSolution), execute
y_pred_standard = tf.round(tf.nn.sigmoid(standard_classifier.predict(batch_x)))
I get an error. Please help me through it.
Hello,
After running the command
docker run -p 8888:8888 -p 6006:6006 -v /c//Users/User/Documents/MachineLearning/introtodeeplearning_labs-master:/notebooks/introtodeeplearning_labs mit6s191/iap2018:labs
where the GitHub repository is in the MachineLearning folder, I got, as expected, the message
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=63b505ca3081282bfd714df2036be073b825e7b07cc073b1
Then I followed step 9 of the Docker tutorial, which is to replace localhost with the machine's IP, 192.168.99.100. So in the Edge browser I opened
http://192.168.99.100:8888/?token=63b505ca3081282bfd714df2036be073b825e7b07cc073b1
But I got the error message "We can't reach this page".
Is there something I did wrong?
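On Docker Toolbox setups the VM's address isn't always 192.168.99.100, so one thing worth checking (assuming the default machine name; substitute yours) is what IP docker-machine actually reports before building the URL:

```shell
# Ask docker-machine for the VM's real IP, then use it in place of localhost:
ip="$(docker-machine ip default)"
echo "http://${ip}:8888/?token=<your-token>"
```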
Hello. I'm running Lab 3 in Google Colab. When I try to execute the cell with the video-saving function, I get an AttributeError.
Cell:
def save_video_of_model(model, env_name, filename='agent.mp4'):
    import skvideo.io
    from pyvirtualdisplay import Display
    display = Display(visible=0, size=(40, 30))
    display.start()

    env = gym.make(env_name)
    obs = env.reset()
    shape = env.render(mode='rgb_array').shape[0:2]

    out = skvideo.io.FFmpegWriter(filename)
    done = False
    while not done:
        frame = env.render(mode='rgb_array')
        out.writeFrame(frame)
        action = model(tf.convert_to_tensor(obs.reshape((1,-1)), tf.float32)).numpy().argmax()
        obs, reward, done, info = env.step(action)
    out.close()
    print "Successfully saved into {}!".format(filename)

save_video_of_model(cartpole_model, "CartPole-v0")
Error:
W0716 08:40:55.591255 140415180294016 abstractdisplay.py:151] xdpyinfo was not found, X start can not be checked! Please install xdpyinfo!
AttributeErrorTraceback (most recent call last)
<ipython-input-25-e18a54b970ec> in <module>()
21 print "Successfully saved into {}!".format(filename)
22
---> 23 save_video_of_model(cartpole_model, "CartPole-v0")
3 frames
<ipython-input-25-e18a54b970ec> in save_video_of_model(model, env_name, filename)
7 env = gym.make(env_name)
8 obs = env.reset()
----> 9 shape = env.render(mode='rgb_array').shape[0:2]
10
11 out = skvideo.io.FFmpegWriter(filename)
/usr/local/lib/python2.7/dist-packages/gym/core.pyc in render(self, mode, **kwargs)
274
275 def render(self, mode='human', **kwargs):
--> 276 return self.env.render(mode, **kwargs)
277
278 def close(self):
/usr/local/lib/python2.7/dist-packages/gym/envs/classic_control/cartpole.pyc in render(self, mode)
186 self.poletrans.set_rotation(-x[2])
187
--> 188 return self.viewer.render(return_rgb_array = mode=='rgb_array')
189
190 def close(self):
/usr/local/lib/python2.7/dist-packages/gym/envs/classic_control/rendering.pyc in render(self, return_rgb_array)
94 buffer = pyglet.image.get_buffer_manager().get_color_buffer()
95 image_data = buffer.get_image_data()
---> 96 arr = np.fromstring(image_data.data, dtype=np.uint8, sep='')
97 # In https://github.com/openai/gym-http-api/issues/2, we
98 # discovered that someone using Xmonad on Arch was having
AttributeError: 'ImageData' object has no attribute 'data'
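The failure is inside gym's bundled rendering code, which suggests a gym/pyglet version mismatch (newer pyglet exposes pixel bytes via get_data() rather than a .data attribute), so pinning compatible versions of the two packages is the usual way out. Separately, the np.fromstring call in that file is deprecated; np.frombuffer is the supported equivalent for raw bytes:

```python
import numpy as np

raw = bytes(range(8))  # stand-in for pyglet's image byte buffer

# np.fromstring on binary data is deprecated; frombuffer reads the same
# bytes without copying:
arr = np.frombuffer(raw, dtype=np.uint8)
```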
Is there any plan to add a permissive license to this repo?
When executing "encoder_output = encoder(inputs)" (search for exact phrase), it fails with a type error:
TypeError: Failed to convert object of type <type 'tuple'> to Tensor. Contents: (Dimension(None), Dimension(100)). Consider casting elements to a supported type.
I'm using colab and following the recommended solution.
I propose
"epsilon = tf.random_normal(shape=(batch, dim))" be replaced with
"epsilon = tf.random_normal(shape=tf.shape(z_mean))"
I followed the remainder of the lab and got something that made sense.
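The proposed change works because tf.shape(z_mean) is evaluated at run time, when the batch dimension is known, whereas the static (batch, dim) tuple still contains Dimension(None). A numpy sketch of the sampling step (illustrative; sample_latent is a hypothetical helper, not from the lab):

```python
import numpy as np

def sample_latent(z_mean, z_logsigma, rng=None):
    # Reparameterization trick: z = mu + sigma * eps, with eps drawn to match
    # z_mean's actual shape, whatever the batch size turns out to be.
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_logsigma) * eps
```

Because eps mirrors z_mean's shape, the same code serves any batch size without hard-coding it.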
lab1/Part1_tensorflow_solution.ipynb includes this code:
def our_dense_layer(x, n_in, n_out):
    # ...
    z = tf.matmul(x, W) + b
When this is called in the next cell, it produces this warning:
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:642: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
tf.Tensor([[0.95257413 0.95257413 0.95257413]], shape=(1, 3), dtype=float32)
This warning can be avoided by replacing the + b segment with tf.add, i.e.
z = tf.add(tf.matmul(x, W), b, name="z")
(which is also more TensorFlow-y).
Probably too minor to mention, but just wanted to point it out in case people notice: since the notebooks indent with two spaces and Colab expects four spaces by default, Colab will color the indented text red as a warning, e.g. in lab1/Part1_tensorflow.ipynb.
Setting the indentation width to 2 spaces fixes this (Tools > Preferences > Indentation width in spaces).
Hello,
I have tried to recreate the Lab 1 Music Generation with RNNs notebook, and I got "Found 0 songs in text" after running the last piece of code.
At first I thought it was my fault, but then I copied all of the solution code and still got that message.
Any ideas how to fix it?
Upon importing mitdeeplearning 0.1.2, no module named 'cv2' is found by the mitdeeplearning.lab2 import.
ModuleNotFoundError Traceback (most recent call last)
in
----> 1 import mitdeeplearning as mdl
~\AppData\Roaming\Python\Python37\site-packages\mitdeeplearning\__init__.py in
2
3 import mitdeeplearning.lab1
----> 4 import mitdeeplearning.lab2
5 import mitdeeplearning.lab3
~\AppData\Roaming\Python\Python37\site-packages\mitdeeplearning\lab2.py in
----> 1 import cv2
2 import os
3 import matplotlib.pyplot as plt
4 import numpy as np
5 import tensorflow as tf
ModuleNotFoundError: No module named 'cv2'
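The cv2 module is provided by the opencv-python package, which apparently isn't pulled in automatically here; installing it into the same environment that runs the notebook usually resolves this (assuming a standard pip setup):

```shell
pip install opencv-python
python -c "import cv2; print(cv2.__version__)"
```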
The pattern '\n\n(.*?)\n\n' can't extract the first song or the last song (if fully generated), since we use the start string 'X' rather than '\n\nX'.
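A small reproduction of the problem, plus one possible fix (splitting on blank lines instead; a sketch, not the lab's actual extraction code):

```python
import re

text = "X:1\nnotes one\n\nX:2\nnotes two\n\nX:3\nnotes three"

# The original pattern needs a blank line on BOTH sides of a song, so the
# first and last songs are never captured:
original = re.findall(r'\n\n(.*?)\n\n', text, flags=re.DOTALL)

# Possible fix: split on blank lines and keep chunks that look like songs
# (ABC tunes start with an 'X' header):
fixed = [s for s in text.split('\n\n') if s.startswith('X')]
```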
Why is exploration not used initially when using policy gradients in deep reinforcement learning?
Running lab1/Part2_music_generation.ipynb fails at the beginning because it requires TensorFlow 1.13.0. It looks like Colab updated to 1.13.1 (I didn't do anything special).
is_correct_tf_version = '1.13.0' in tf.__version__
...
AssertionError: Wrong tensorflow version (1.13.1) installed
Replacing that check with the following will work, unless that specific version is important:
is_correct_tf_version = tf.__version__ >= '1.13.0'
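One caveat with that suggestion (my own observation, not from the notebook): Python compares strings lexicographically, so '1.9.0' sorts after '1.13.0' and would wrongly pass the check. A sketch of a numeric comparison instead, assuming plain dotted-integer versions:

```python
def version_tuple(v):
    # Compare release versions numerically; as plain strings, '1.9.0' sorts
    # AFTER '1.13.0', which would let an older TF pass the >= check.
    return tuple(int(part) for part in v.split('.')[:3])

ok = version_tuple('1.13.1') >= version_tuple('1.13.0')
bad_as_strings = '1.9.0' >= '1.13.0'   # True, but numerically wrong
```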
Hi there,
this is more of a question; I'm not sure it's an issue.
In Lab 2 Part 1, two network types are analyzed: fully connected and CNN.
Testing both on the test images shows much better results with the fully connected network than with the CNN.
I tried changing parameters (learning rate and optimizer), but it didn't change the results much.
The CNN got 8 out of 20 test images correct, while the fully connected network got 19 out of 20.
I was expecting the CNN to show better results, since I thought it was more appropriate for vision applications.
Did I do something wrong?
Thanks and regards,
Cassiano
The link URL has two "https://" and is broken as a result.
I am encountering an InvalidArgumentError when I use Keras model.predict with a tf.constant as input. I am not sure whether model.predict doesn't work with tf.constant or I am doing something wrong. It works fine when I use a numpy array with the same values.
# Define the number of inputs and outputs
n_input_nodes = 2
n_output_nodes = 3
# First define the model
model = Sequential()
'''TODO: Define a dense (fully connected) layer to compute z'''
# Remember: dense layers are defined by the parameters W and b!
# You can read more about the initialization of W and b in the TF documentation :)
dense_layer = Dense(n_output_nodes, input_shape=(n_input_nodes,),activation='sigmoid') # TODO
# Add the dense layer to the model
model.add(dense_layer)
Now when I do prediction using:
# Test model with example input
x_input = tf.constant([[1.0,2.]], shape=(1,2))
'''TODO: feed input into the model and predict the output!'''
print(model.predict(x_input)) # TODO
I get the following error:
InvalidArgumentError: In[0] is not a matrix. Instead it has shape [2]
[[{{node MatMul_3}}]] [Op:StatefulPartitionedCall]
When I use a numpy array, it works:
# Test model with example input
x_input =np.array([[1.0,2.]])
'''TODO: feed input into the model and predict the output!'''
print(model.predict(x_input)) # TODO
[[0.19114174 0.88079417 0.8062956 ]]
Could you let me know if this is a tf issue? If so, I can raise an issue on the tf repository.
basename = (song)name: I'm getting a syntax error on this line.
Hello Team,
I was trying to work through your Lab 1 music generation notebook, but I am unable to load the data with the following code:
# download data
songs = mdl.lab1.load_training_data()
I got this below error:
AttributeError: module 'mitdeeplearning' has no attribute 'lab1'
Any help?
Regards,
Ankit
Hi amini,
I'm trying to pull the Docker image from the DockerHub link, but there seem to be some problems. When I try
docker pull mit6s191/iap2018
in the terminal, it gives me this message:
Using default tag: latest
Error response from daemon: manifest for mit6s191/iap2018:latest not found
Then, when I use
docker pull mit6s191/iap2018:labs
it shows:
error pulling image configuration: Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/48/48db1fb4c0b5d8aeb65b499c0623b057f6b50f93eed0e9cfb3f963b0c12a74db/data?Expires=1524752941&Signature=AKuwnCd69y-fs0NlLjQnAlBoUhbht-gWbIYIoIESf7dERzjlkeejUndYC1QCnEhjjlhZAvv2NWQFWEf-Efc6noGUV9hK4QRVaQqO23zRKRrqarTWVMLj5LQX4X1Qikze5YEXy4VqdNm5t88WRQsfDvsPHHDmKx6vqA2V4VgVDP8_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
Could you please help address this problem? Thank you.
I ran the code below in Colab, but I didn't hear any sound.
!pip install mitdeeplearning
import mitdeeplearning as mdl
songs = mdl.lab1.load_training_data()
example_song = songs[0]
mdl.lab1.play_song(example_song)
The images in the notebooks use relative paths based on the folder structure. This works when rendering on GitHub, but not in Colab. For example, from lab1/Part1_tensorflow.ipynb:
...
![alt text](img/add-graph.png "Computation Graph")
This can be fixed by pulling in the image path from the web, e.g.:
...
![alt text](https://github.com/aamini/introtodeeplearning_labs/raw/master/lab1/img/add-graph.png "Computation Graph")
Adding the https://github.com/aamini/introtodeeplearning_labs/raw/master/lab1/ (or lab2, etc.) prefix to image paths in the notebooks will ensure they render in both GitHub and Colab.
I'm getting this error in most of the notebooks.
Stacktrace in Colab ->
ModuleNotFoundError Traceback (most recent call last)
in ()
14
15 # Import the necessary class-specific utility files for this lab
---> 16 import introtodeeplearning_labs as util
/content/introtodeeplearning_labs/__init__.py in ()
----> 1 from lab1 import *
2 from lab2 import *
3 # from lab3 import *
4
5
ModuleNotFoundError: No module named 'lab1'
Windows 10 Docker install question
In Lab 2 Part 1, the formula for calculating the number of nodes in a feature map after a convolution or a pooling operation is given as n = (((d - i + 2p) / s) + 1)^2. Shouldn't it be without the square?
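If I read the formula right, ((d - i + 2p) / s) + 1 is the side length of the (square) output map, and the node count is that side squared, so the square looks intentional. A quick sketch of the standard convolution arithmetic, assuming a square d x d input and i x i kernel:

```python
def feature_map_nodes(d, i, p, s):
    # Side length of the output feature map for a d x d input, an i x i
    # kernel, padding p, and stride s:
    side = (d - i + 2 * p) // s + 1
    # The total number of nodes is the side length squared, since the
    # feature map is two-dimensional.
    return side, side * side
```

For example, a 28x28 input with a 3x3 kernel, no padding, and stride 1 gives a 26x26 map, i.e. 676 nodes.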
Hey,
Why am I getting an error with this line of code: "import util.download_lung_data"? Is it correct? The error says there is no module named util.
Thanks
I am using VSCode, and when I try to play any song, even after running
apt-get install abcmidi timidity > /dev/null 2>&1
I get this error:
rm: tmp.mid: No such file or directory
Hi, I get an error from this line in Lab 1:
dense_layer = Dense(n_output_nodes, input_shape=(n_input_nodes,), activation='sigmoid')
Error:
ValueError: When using data tensors as input to a model, you should specify the steps argument.
When I run the following:
!apt-get install abcmidi timidity > /dev/null 2>&1
I receive:
The system cannot find the path specified.
I am trying to run the notebooks posted here, following the instructions in the README:
sudo docker run -p 8888:8888 -p 6006:6006 -v https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs mit6s191/iap2018:labs
but I get the error:
docker: Error response from daemon: invalid bind mount spec "https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs": invalid mode: /notebooks/introtodeeplearning_labs.
Am I missing the path for the repo?
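The -v flag expects a filesystem path (or a volume name) as its source, not a URL, so one likely fix is to clone the repository locally first and bind-mount its absolute path:

```shell
git clone https://github.com/aamini/introtodeeplearning_labs.git
sudo docker run -p 8888:8888 -p 6006:6006 \
  -v "$(pwd)/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs" \
  mit6s191/iap2018:labs
```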
Here is the input:
class OurDenseLayer(tf.keras.layers.Layer):
    def __init__(self, n_output_nodes):
        super(OurDenseLayer, self).__init__()
        self.n_output_nodes = n_output_nodes

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Define and initialize parameters: a weight matrix W and bias b
        # Note that parameter initialization is random!
        self.W = self.add_weight("weight", shape=[d, self.n_output_nodes])  # note the dimensionality
        self.b = self.add_weight("bias", shape=[1, self.n_output_nodes])  # note the dimensionality

    def call(self, x):
        '''TODO: define the operation for z (hint: use tf.matmul)'''
        z = tf.matmul(input, self.W) + self.b
        '''TODO: define the operation for out (hint: use tf.sigmoid)'''
        y = tf.sigmoid(z)
        return y
# Since layer parameters are initialized randomly, we will set a random seed for reproducibility
tf.random.set_seed(1)
layer = OurDenseLayer(3)
layer.build((1,2))
x_input = tf.constant([[1,2.]], shape=(1,2))
y = layer.call(x_input)
# test the output!
print(y.numpy())
mdl.lab1.test_custom_dense_layer_output(y)
and here is the output with errors:
ValueError Traceback (most recent call last)
<ipython-input-32-67151a0fc768> in <module>()
30 layer.build((1,2))
31 x_input = tf.constant([[1,2.]], shape=(1,2))
---> 32 y = layer.call(x_input)
33
34 # test the output!
7 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
95 ctx.ensure_initialized()
---> 96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
98
ValueError: Attempt to convert a value (<bound method Kernel.raw_input of <google.colab._kernel.Kernel object at 0x7ff78e56c198>>) with an unsupported type (<class 'method'>) to a Tensor.
Why am I getting this ValueError?
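The traceback hints at the cause: tf.matmul receives `input`, and inside call() that name is not the layer's argument (x) but Python's builtin, which Colab rebinds to the kernel's raw_input method, matching the "<class 'method'>" in the error. A minimal, framework-free illustration of the name mix-up:

```python
# Inside call(self, x), the tensor argument is named x; referring to `input`
# instead picks up the Python builtin (rebound to raw_input under Colab),
# so a bound method gets passed where a tensor is expected.
def call_buggy(x):
    return input          # NOT the argument: resolves to the builtin

def call_fixed(x):
    return x              # the actual data passed in

result_buggy = call_buggy([1.0, 2.0])
result_fixed = call_fixed([1.0, 2.0])
```

Changing tf.matmul(input, self.W) to tf.matmul(x, self.W) in the layer should make the cell run.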
In the code below, I can see the activation function and the weights, but where exactly do we add the bias?
# Import relevant packages
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# Define the number of inputs and outputs
n_input_nodes = 2
n_output_nodes = 3
# First define the model
model = Sequential()
'''TODO: Define a dense (fully connected) layer to compute z'''
# Remember: dense layers are defined by the parameters W and b!
# You can read more about the initialization of W and b in the TF documentation :)
dense_layer = Dense(n_output_nodes, input_shape=(n_input_nodes,),activation='sigmoid')
# Add the dense layer to the model
model.add(dense_layer)
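In Keras, Dense creates the bias itself: with use_bias=True (the default), the layer owns both W and b, and get_weights() returns them as [W, b] after building. A numpy sketch of what the layer computes (illustrative, not the Keras source):

```python
import numpy as np

def dense_forward(x, W, b):
    # output = activation(x @ W + b): the bias b is added by the layer
    # internally; the user never adds it explicitly.
    z = x @ W + b
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
```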
Hi,
in Lab 2, at the beginning of Part 2, when I called PPBFaceEvaluator I got the error below:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
----> 1 ppb = util.PPBFaceEvaluator() #create dataset handler
~\Python\introtodeeplearning_labs\lab2\util.py in __init__(self, skip)
72 with open(ppb_anno) as f:
73 for line in f.read().split('\r'):
---> 74 ind, name, gender, numeric, skin, country = line.split(',')
75 self.anno_dict[name] = (gender.lower(),skin.lower())
76
ValueError: too many values to unpack (expected 6)
I am using Python 3.7 in a Jupyter Notebook.
Thank you in advance
I trained it for a longer time, but I am still getting 0 songs extracted. Please at least update the loss graph.
I'm getting a "No module named 'tensorflow_core.estimator'" error while running util.play_generated_song(text), though I have installed TensorFlow.
I get an error while calling the encoder and decoder network functions: it can't convert the type to tensors.
Hi, is there a quick solution for importing these modules?
ModuleNotFoundError Traceback (most recent call last)
in ()
8 from IPython import display as ipythondisplay
9
---> 10 import introtodeeplearning_labs as util
11
12 is_correct_tf_version = '1.14.0' in tf.__version__
/content/introtodeeplearning_labs/__init__.py in ()
----> 1 from lab1 import *
2 from lab2 import *
3 # from lab3 import *
4
5
Will you please post a link to the free Docker image on Docker Hub? I cannot find it.
I have a miniconda installation with Python3.6.5.
I tried installing python-midi using pip.
The package fails with an error.
Using cached https://files.pythonhosted.org/packages/8d/e1/fd34aa05508d907449fb2d66a679d4f98eeeacdb4b3c7e6af87d91c4fa21/python-midi-v0.2.4.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\shrung\AppData\Local\Temp\pip-install-jyfi7tsr\python-midi\setup.py", line 42
print "No sequencer available for '%s' platform." % platform
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("No sequencer available for '%s' platform." % platform)?
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\shrung\AppData\Local\Temp\pip-install-jyfi7tsr\python-midi\
I have a few questions:
In lab2/Part2_Debiasing.ipynb, when viewed on GitHub, the equations for vae_loss_function are not rendered as LaTeX; they appear as raw LaTeX strings.
Hi,
I think this is a basic step; I've been googling but I can't find the correct path to this GitHub repo to substitute into the docker command.
Could anyone help me with this?
Thank you in advance,
M
The code
mdl.lab1.play_song( )
is unresponsive on localhost. There is no error and no long wait; it simply completes and does nothing.