
cvpr2015's Introduction

Applied Deep Learning for Computer Vision with Torch

CVPR 2015, Boston, MA

### Slides and Notebooks

  • Slides
  • Amazon EC2 image with torch + itorch + Atari + notebooks can be launched from this link and the AMI ID is: ami-b36981d8

Notebooks:

cvpr2015's People

Contributors

carlobaldassi, carpedm20, gforge, hevalazizoglu, koraykv, soumith, yosssi


cvpr2015's Issues

itorch image not displaying images

Hello,

When I use the following code:

local screens = {}
-- take 36 random steps in the Atari environment and keep a copy of each frame
for i = 1, 36 do
  local screen, reward, terminal = game_env:step(game_actions[torch.random(3)])
  table.insert(screens, screen[1]:clone())
end
itorch.image(screens)  -- display all collected frames in the notebook

A blank image appears instead of the screens from the game (see attached screenshot).

Solving installation problems - Can't find kernel, missing env dependency.

Hey!
I have tried installing iTorch on machines that already had Torch and Jupyter running, and kept hitting a problem at the final 'luarocks make' step, which spat out the following error:

$ sudo luarocks make

Missing dependencies for itorch:
env
torch >= 7.0
image

Error: Could not satisfy dependency: env

Alternatively, I could start the iTorch profile in Jupyter, but it would fail to find the kernel five times and then die.

I solved this by running 'dos2unix' on the 'itorch' and 'itorch_launcher' scripts, both in the main install folder and in the torch/install/lib and torch/install/bin directories.

It seems those files had some '\r' (carriage return) characters in them, which were interpreted in a way that produced no error but silently skipped loading the kernel and its basic dependencies.

This took a lot of frustrating time; I hope no one else runs into it. Happy learning!

Tutorial fails on the last line when trying to train with GPU

cunn: neural networks on GPUs using CUDA

In [60]:
require 'cunn';

The idea is pretty simple. Take a neural network, and transfer it over to GPU:

In [61]:
net = net:cuda()

Also, transfer the criterion to GPU:

In [62]:
criterion = criterion:cuda()

Ok, now the data:

In [63]:
trainset.data = trainset.data:cuda()

Okay, let's train on GPU :) #sosimple

In [64]:
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.001
trainer.maxIteration = 5 -- just do 5 epochs of training.
In [65]:
trainer:train(trainset)
Out[65]:
# StochasticGradient: training 
...rs/robsalz/torch/install/share/lua/5.1/nn/LogSoftMax.lua:4: attempt to call field 'LogSoftMax_updateOutput' (a nil value)
stack traceback:
    ...rs/robsalz/torch/install/share/lua/5.1/nn/LogSoftMax.lua:4: in function 'updateOutput'
    ...rs/robsalz/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
    ...lz/torch/install/share/lua/5.1/nn/StochasticGradient.lua:35: in function 'f'
    [string "local f = function() return trainer:train(tra..."]:1: in main chunk
    [C]: in function 'xpcall'
    /Users/robsalz/torch/install/share/lua/5.1/itorch/main.lua:179: in function </Users/robsalz/torch/install/share/lua/5.1/itorch/main.lua:143>
    /Users/robsalz/torch/install/share/lua/5.1/lzmq/poller.lua:75: in function 'poll'
    ...s/robsalz/torch/install/share/lua/5.1/lzmq/impl/loop.lua:307: in function 'poll'
    ...s/robsalz/torch/install/share/lua/5.1/lzmq/impl/loop.lua:325: in function 'sleep_ex'
    ...s/robsalz/torch/install/share/lua/5.1/lzmq/impl/loop.lua:370: in function 'start'
    /Users/robsalz/torch/install/share/lua/5.1/itorch/main.lua:350: in main chunk
    [C]: in function 'require'
    (command line):1: in main chunk
    [C]: at 0x010a62bbd0

Error when calling :forward()

When running the example from the tutorial "Deep Learning with Torch", I got this error when calling the :forward() function:

predicted = net:forward(testset.data[100])
Channel 1, Mean: 125.83175029297
Channel 1, Standard Deviation: 63.143400842609
Channel 2, Mean: 123.26066621094
Channel 2, Standard Deviation: 62.369209019002
Channel 3, Mean: 114.03068681641
Channel 3, Standard Deviation: 66.965808411114
horse
/Users/me/Torch/install/bin/luajit: ...me/Torch/install/share/lua/5.1/nn/SpatialConvolution.lua:104: attempt to index field 'THNN' (a nil value)
stack traceback:
    ...me/Torch/install/share/lua/5.1/nn/SpatialConvolution.lua:104: in function 'updateOutput'
    /Users/me/Torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
    tutodl.lua:84: in main chunk
    [C]: in function 'dofile'
    ...me/Torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x01019ad810

See tutodl.txt for the complete code

How do I know the GPU is being used when I run the Deep Learning ... notebook?

I'm trying to run the Deep Learning demo notebook, and it's taking a really long time on the training. It also doesn't look like it's using the GPU. I'm on an Amazon EC2 g2.2xlarge with the NVIDIA Corporation GK104GL [GRID K520] (rev a1). I tried some of the solutions here: karpathy/char-rnn#89, like

require 'cunn'
require 'cutorch'

and th -l cutorch and th -l cunn from the command line. However, when I run the line

trainer:train(trainset)

it just seems to sit there in progress and doesn't go anywhere. I also checked the GPU usage with nvidia-smi, and it looks like this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.77                 Driver Version: 361.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   31C    P8    26W / 125W |    121MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      7379    C   /home/ubuntu/torch/install/bin/luajit          119MiB |
+-----------------------------------------------------------------------------+

The GPU memory usage jumps and the luajit process appears after require 'cutorch', but the usage never increases beyond that, and GPU-Util sits at 0%. I have CUDA installed; nvcc --version gives:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Wed_May__4_21:01:56_CDT_2016
Cuda compilation tools, release 8.0, V8.0.26

It's running on Ubuntu 16.04. I verified the samples are working, and CUDA isn't giving any errors.
Any ideas why it wouldn't be using the GPU?
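
For what it's worth, this is the quick check I run in the notebook to confirm that cutorch at least sees the card (just standard cutorch calls, nothing specific to the tutorial):

require 'cutorch'
print(cutorch.getDeviceCount())              -- number of GPUs cutorch can see
print(cutorch.getDeviceProperties(1).name)   -- should report the GRID K520
print(cutorch.getMemoryUsage(1))             -- free and total memory on device 1, in bytes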

Replace softmax

Hi everybody,
I'm testing the LeNet-5 example shown in this tutorial, and with some changes it works very well. Then I remove the softmax layer and apply a Gaussian to the output. For each iteration, I call net:forward(input), compute the Gaussian of all values in the output, and continue with the criterion and the backward pass. After some iterations, the output becomes a NaN vector.
Can anyone explain this behaviour?
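
To make the setup concrete, here is a minimal sketch of what I mean (the network is the tutorial's LeNet-5 with the LogSoftMax removed; the Gaussian step, the dummy input and the learning rate are just illustrative):

require 'nn'

-- the tutorial's LeNet-5, but without the final LogSoftMax layer
net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.View(16*5*5))
net:add(nn.Linear(16*5*5, 120))
net:add(nn.ReLU())
net:add(nn.Linear(120, 84))
net:add(nn.ReLU())
net:add(nn.Linear(84, 10))

criterion = nn.ClassNLLCriterion()

-- one step of the loop I described, on a dummy CIFAR-sized input
local input, target = torch.randn(3, 32, 32), 3
local output = net:forward(input)
local gauss = torch.exp(-torch.pow(output, 2) / 2)             -- "gaussian of all values in output"
local err = criterion:forward(gauss, target)
local gradGauss = criterion:backward(gauss, target)
local gradOutput = gradGauss:clone():cmul(gauss):cmul(-output) -- chain rule through the gaussian
net:zeroGradParameters()
net:backward(input, gradOutput)
net:updateParameters(0.001)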
Thanks in advance

Nonlinearities missing between Linear layers in "Deep Learning with Torch: the 60-minute blitz"

Hi,
In Deep Learning with Torch: the 60-minute blitz, a network is constructed which (I understand) is meant to imitate LeNet-5. However, no nonlinearities are applied between the final linear layers:

net:add(nn.Linear(16*5*5, 120))             -- fully connected layer (matrix multiplication between input and weights)
net:add(nn.Linear(120, 84))
net:add(nn.Linear(84, 10))                   -- 10 is the number of outputs of the network (in this case, 10 digits)

which defeats the purpose of having multiple layers, as the result of stacking linear layers is still a linear (affine) function of the input. Even if the goal isn't to exactly replicate LeNet-5, omitting the nonlinearities entirely will probably be confusing to readers - after seeing this snippet, I thought that perhaps nn.Sequential applies some default nonlinearity after each layer, but that doesn't seem to be the case.
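
To illustrate the point, here is a small self-contained check (with made-up layer sizes) showing that two stacked nn.Linear layers collapse into a single affine map:

require 'nn'

-- two stacked Linear layers with nothing in between
local stacked = nn.Sequential()
stacked:add(nn.Linear(4, 3))
stacked:add(nn.Linear(3, 2))

-- the single equivalent layer: W = W2*W1, b = W2*b1 + b2
local l1, l2 = stacked:get(1), stacked:get(2)
local single = nn.Linear(4, 2)
single.weight:copy(l2.weight * l1.weight)
single.bias:copy(l2.weight * l1.bias + l2.bias)

-- both networks agree (up to floating point) on any input
local x = torch.randn(4)
print((stacked:forward(x) - single:forward(x)):abs():max())   -- prints ~0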

I'm still learning Torch, so I might be wrong, but I think that the following snippet would be better:

net:add(nn.Linear(16*5*5, 120))             -- fully connected layer (matrix multiplication between input and weights)
net:add(nn.Sigmoid())
net:add(nn.Linear(120, 84))
net:add(nn.Sigmoid())
net:add(nn.Linear(84, 10))                   -- 10 is the number of outputs of the network (in this case, 10 digits)

Any thoughts on this?

How is the file cifar10-train.t7 organised?

I downloaded the file cifar10-train.t7 for training. However, I do not know the structure of the file. The content of the file looks like:
[screenshot: the file contents printed as a large matrix of numbers]
I have some questions about this:

  1. I do not see where the labels are stored (for example 'airplane', 'automobile', 'bird', ... as mentioned in the code). Which numbers in the matrix are the labels?
  2. The file contains 10,000 images, but how can I tell which numbers in this matrix belong to the 1st image, the 2nd image, the 3rd image, and so on?

Thank you very much in advance for your replies.
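
For reference, this is how I am loading and inspecting the file so far (assuming the fields are named data and label, as in the tutorial code):

require 'torch'
trainset = torch.load('cifar10-train.t7')
print(trainset)                  -- lists the fields stored in the file
print(#trainset.data)            -- should be 10000x3x32x32: image index, channel, height, width
print(trainset.label[1])         -- a single number that indexes into the classes table
itorch.image(trainset.data[1])   -- shows the first image (only works inside an itorch notebook)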

The tutorial Deep Learning with Torch may have an issue

While trying something from the tutorial, I noticed that the code for defining the training data is:


-- ignore setmetatable for now, it is a feature beyond the scope of this tutorial. It sets the index operator.
setmetatable(trainset, 
    {__index = function(t, i) 
             return {t.data[i], t.label[i]} 
                end}
);
trainset.data = trainset.data:double() -- convert the data from a ByteTensor to a DoubleTensor.

function trainset:size() 
    return self.data:size(1) 
end

but that crashes with a stack overflow error if you try to access size().

It's cool

I'm sorry, I got it. It was an easy problem.

Training error. Help

Hello.
I am training a neural network on two classes of my own.
The error occurs during training.
How can I fix it?

th> require 'nn';
th> trainset = torch.load('animals_peoples2.t7')
th> testset = torch.load('animals_peoples2.t7')
th> classes = {'animals', 'peoples'}

th> print(trainset)
{
data : ByteTensor - size: 17299x3x96x96
label : ByteTensor - size: 17299
}

th> print(#trainset.data)
17299
3
96
96
[torch.LongStorage of size 4]

th> setmetatable(trainset,
..> {__index = function(t, i)
..> return {
..> t.data[i],
..> t.label[i]
..> }
..> end}
..> );

th> function trainset:size()
..> return self.data:size(1)
..> end

th> trainset.data = trainset.data:double()

th> print(trainset:size())
17299

th> print(trainset[33])
{
1 : DoubleTensor - size: 3x96x96
2 : 1
}

th> redChannel = trainset.data:select(2, 1)

th> print(#redChannel)
17299
96
96
[torch.LongStorage of size 3]

th> mean = {} -- store the mean, to normalize the test set in the future

th> stdv = {} -- store the standard-deviation for the future

th> for i=1,3 do -- over each image channel
..> mean[i] = trainset.data:select(2, 1):mean() -- mean estimation
..> print('Channel ' .. i .. ', Mean: ' .. mean[i])
..> trainset.data:select(2, 1):add(-mean[i]) -- mean subtraction
..>
..> stdv[i] = trainset.data:select(2, i):std() -- std estimation
..> print('Channel ' .. i .. ', Standard Deviation: ' .. stdv[i])
..> trainset.data:select(2, i):div(stdv[i]) -- std scaling
..> end
Channel 1, Mean: 0
Channel 1, Standard Deviation: 0
Channel 2, Mean: nan
Channel 2, Standard Deviation: 0
Channel 3, Mean: nan
Channel 3, Standard Deviation: 0

th> net = nn.Sequential()
th> net:add(nn.SpatialConvolution(3, 6, 9, 9)) -- 3 input image channels, 6 output channels, 5x5 convolution kernel
th> net:add(nn.ReLU()) -- non-linearity
th> net:add(nn.SpatialMaxPooling(2,2,2,2)) -- A max-pooling operation that looks at 2x2 windows and finds the max.
th> net:add(nn.SpatialConvolution(6, 16, 9, 9))
th> net:add(nn.ReLU()) -- non-linearity
th> net:add(nn.SpatialMaxPooling(2,2,2,2))
th> net:add(nn.View(1699)) -- reshapes from a 3D tensor of 16x5x5 into 1D tensor of 1655
th> net:add(nn.Linear(1699, 120)) -- fully connected layer (matrix multiplication between input and weights)
th> net:add(nn.ReLU()) -- non-linearity
th> net:add(nn.Linear(120, 84))
th> net:add(nn.ReLU()) -- non-linearity
th> net:add(nn.Linear(84, 10)) -- 10 is the number of outputs of the network (in this case, 10 digits)
th> net:add(nn.LogSoftMax()) -- converts the output to a log-probability. Useful for classification problems

th> criterion = nn.ClassNLLCriterion()

th> trainer = nn.StochasticGradient(net, criterion)
th> trainer.learningRate = 0.001
th> trainer.maxIteration = 5 -- just do 5 epochs of training.

th> trainer:train(trainset)

trainer:train(trainset)

StochasticGradient: training

/root/facedetect/torch/install/share/lua/5.1/nn/THNN.lua:110: Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed. at /tmp/luarocks_nn-scm-1-1625/nn/lib/THNN/generic/ClassNLLCriterion.c:50
stack traceback:
[C]: in function 'v'
/root/facedetect/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'ClassNLLCriterion_updateOutput'
...ect/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:43: in function 'forward'
...ct/torch/install/share/lua/5.1/nn/StochasticGradient.lua:35: in function 'train'
[string "_RESULT={trainer:train(trainset)}"]:1: in main chunk
[C]: in function 'xpcall'
/root/facedetect/torch/install/share/lua/5.1/trepl/init.lua:661: in function 'repl'
...tect/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:199: in main chunk
[C]: at 0x004064f0

How to run "Deep Learning with Torch.ipynb"?

I tried running the file Deep_Learning_with_Torch.ipynb using itorch notebook Deep_Learning_with_Torch.ipynb, and I also tried ipython notebook Deep_Learning_with_Torch.ipynb, but I get a 500 : Internal Server Error message.

Tutorial fails on the GPU part with 'cunn' - training with SGD

Hey, I was just trying the tutorial and the last part doesn't seem to work. Here is the error I got. Can someone help me?

I already tried reinstalling 'nn' and 'cunn', and also reinstalling Torch altogether! Btw, can someone also tell me why we do that?

Channel 1, Mean: 125.83175029297
Channel 1, Standard Deviation: 63.143400842609
Channel 2, Mean: 123.26066621094
Channel 2, Standard Deviation: 62.369209019002
Channel 3, Mean: 114.03068681641
Channel 3, Standard Deviation: 66.965808411114
# StochasticGradient: training
# current error = 2.2234263599277
# current error = 1.88329374547
# current error = 1.6842083223224
# current error = 1.5661180615187
# current error = 1.4682321660876
# StochasticGradient: you have reached the maximum number of iterations
# training error = 1.4682321660876
/home/s43moham/torch/install/bin/luajit: /home/s43moham/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
/home/s43moham/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #3 to 'v' (cannot convert 'struct THCudaTensor *' to 'struct THDoubleTensor *')
stack traceback:
    [C]: in function 'v'
    /home/s43moham/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialConvolutionMM_updateOutput'
    ...am/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:96: in function <...am/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:92>
    [C]: in function 'xpcall'
    /home/s43moham/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
    ...e/s43moham/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
    torchCudaTutorial.lua:82: in main chunk
    [C]: in function 'dofile'
    ...oham/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00406670

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
    [C]: in function 'error'
    /home/s43moham/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
    ...e/s43moham/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
    torchCudaTutorial.lua:82: in main chunk
    [C]: in function 'dofile'
    ...oham/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00406670

How to create data.t7 from an image set

Hello.

I have images in two folders (each image is 96x96x3):

animals
peoples

How do I create a dataset in .t7 format from these two classes?

I want to train a neural network to tell the difference between humans and animals.
After creating the dataset, I plan to train it following the Deep Learning with Torch tutorial.
Any reply would be very helpful.
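
Here is roughly what I have in mind so far, in case it helps (the folder names, the .jpg filter, the 96x96 size and the output file name are all just my setup; it uses the image and paths packages):

require 'image'
require 'paths'

local dirs = {'animals', 'peoples'}            -- one folder per class
local imgs, labels = {}, {}

for classId, dir in ipairs(dirs) do
   for file in paths.files(dir, '%.jpg$') do   -- iterate over the jpg files in the folder
      local img = image.load(paths.concat(dir, file), 3, 'float')  -- 3xHxW, values in [0,1]
      img = image.scale(img, 96, 96)
      table.insert(imgs, img:mul(255):byte())  -- back to 0..255 bytes, like the CIFAR file
      table.insert(labels, classId)            -- 1 = animals, 2 = peoples
   end
end

-- pack everything into the layout the tutorial expects: a table with data and label
local dataset = {
   data  = torch.ByteTensor(#imgs, 3, 96, 96),
   label = torch.ByteTensor(#labels)
}
for i = 1, #imgs do
   dataset.data[i]:copy(imgs[i])
   dataset.label[i] = labels[i]
end

torch.save('animals_peoples.t7', dataset)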

Running time

How long does it take to run the 60-minute tutorial example? My CPU is an Intel i7 at 3.4 GHz, and my GPU is an NVIDIA GeForce GTX 750 Ti.
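
In case it helps to compare numbers, I plan to time just the training call with torch.Timer, roughly like this:

local timer = torch.Timer()
trainer:train(trainset)
print(string.format('training took %.1f seconds', timer:time().real))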

Displays nothing

When I run graph.dot(mlp.fg, 'MLP', 'MLP'), it displays nothing in the itorch notebook. Any suggestions on how to fix this?
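
As a workaround I am trying to write the graph to files instead of relying on the inline display (my understanding from the nngraph README is that the third argument to graph.dot is a file-name prefix, and that graphviz needs to be installed for this to work):

require 'nngraph'

-- a tiny gModule just so there is something to draw; mlp stands in for my real network
local x = nn.Identity()()
local h = nn.Tanh()(nn.Linear(10, 5)(x))
local y = nn.Linear(5, 2)(h)
local mlp = nn.gModule({x}, {y})

graph.dot(mlp.fg, 'MLP', 'myMLP')   -- should write myMLP.svg and myMLP.dot to the current directory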
