
unsupgpu's People

Contributors

viorik


unsupgpu's Issues

Zero padding issue

@viorik I was able to install the package (I added installation instructions to the Wiki) and I am now trying to test it on a simple example, adapted from this one:
https://github.com/torch/demos/blob/master/train-autoencoder/train-autoencoder.lua
to work with the MNIST dataset (32x32 image patches and a 9x9 convolution kernel).

I tested my code with unsup and it works OK.
Porting it to unsupgpu, I made some minor changes. Instead of using:
decoder = unsup.SpatialConvFistaL1(decodertable, kw, kh, iw, ih, params.lambda)
I am now using:
decoder = unsupgpu.SpatialConvFistaL1(decodertable, kw, kh, iw, ih, 0, 0, params.lambda,params.batchsize)
with 0,0 being the padding parameters. I also tried changing this to 4,4, 2,2, and 1,1. All of these result in an inconsistent tensor size error upon calling:
module:updateOutput(input, target)

Using 0,0 results in the following error:

.../torch/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: inconsistent tensor size at /tmp/luarocks_torch-scm-1-7398/torch7/lib/TH/generic/THTensorCopy.c:7
stack traceback:
[C]: in function 'copy'
.../torch/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: in function 'updateOutput'
/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:51: in function 'f'
/home/torch/install/share/lua/5.1/optim/fista.lua:78: in function 'FistaLS'
/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
/home/torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
train-autoencoder-mnist.lua:271: in main chunk

Using 4,4 results in the following error:

/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:116: inconsistent tensor size at /tmp/luarocks_torch-scm-1-7398/torch7/lib/TH/generic/THTensorCopy.c:7
stack traceback:
[C]: in function 'copy'
/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:116: in function 'updateOutput'
/home/torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
train-autoencoder-mnist.lua:271: in main chunk
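
For context, here is a minimal sketch of what I suspect unsupgpu expects: since the constructor takes a batchsize argument (and the batched code in a later issue below builds 4D inputs), the input probably needs an explicit batch dimension. The reshape and variable names below are my assumption, not something confirmed by the package:

-- hypothetical: wrap one 1x32x32 sample (nfiltersin x ih x iw) into a batch of size 1
-- before calling updateOutput, since the decoder was built with params.batchsize
local ex     = dataset[t]                    -- {input, target} pair, as in the demo
local batch  = ex[1]:reshape(1, 1, 32, 32)   -- batchsize x nfiltersin x ih x iw
local target = ex[2]:reshape(1, 1, 32, 32)
module:updateOutput(batch, target)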

Here's the main file code:

----------------------------------------------------------------------
-- This script shows how to train autoencoders on natural images,
-- using the unsup package.
--
-- Borrowed from Koray Kavukcuoglu's unsup demos
--
-- In this script, we demonstrate the use of different types of
-- autoencoders. Learned filters can be visualized by providing the
-- flag -display.
--
-- Note: simple auto-encoders (with no sparsity constraint on the code) typically
-- don't yield filters that are visually appealing, although they might be
-- minimizing the reconstruction error correctly.
--
-- We demonstrate 2 types of auto-encoders:
--   * plain: regular auto-encoder
--   * predictive sparse decomposition (PSD): the encoder is trained
--     to predict an optimal sparse decomposition of the input
--
-- Both types of auto-encoders can use linear or convolutional
-- encoders/decoders. The convolutional version typically yields more
-- interesting, less redundant filters for images.
--
-- Koray Kavukcuoglu, Clement Farabet
----------------------------------------------------------------------

os.execute('clear')

require 'unsupgpu'
--require 'unsup'
require 'image'
require 'optim'
require 'autoencoder-data-mnist'

----------------------------------------------------------------------
-- parse command-line options
--
cmd = torch.CmdLine()

cmd:text()
cmd:text('Training a simple sparse coding dictionary on Berkeley images')
cmd:text()
cmd:text('Options')
-- general options:
cmd:option('-dir', 'outputs', 'subdirectory to save experiments in')
cmd:option('-seed', 1, 'initial random seed')
cmd:option('-threads', 2, 'threads')

-- for all models:
cmd:option('-model', 'conv-psd', 'auto-encoder class: linear | linear-psd | conv | conv-psd')
--cmd:option('-inputsize', 25, 'size of each input patch')
cmd:option('-inputsize', 32, 'size of each input patch')
cmd:option('-nfiltersin', 1, 'number of input convolutional filters')
cmd:option('-nfiltersout', 16, 'number of output convolutional filters')
cmd:option('-lambda', 1, 'sparsity coefficient')
cmd:option('-beta', 1, 'prediction error coefficient')
cmd:option('-eta', 2e-3, 'learning rate')
cmd:option('-batchsize', 1, 'batch size')
cmd:option('-etadecay', 1e-5, 'learning rate decay')
cmd:option('-momentum', 0, 'gradient momentum')
cmd:option('-maxiter', 100000, 'max number of updates')

-- use hessian information for training:
cmd:option('-hessian', true, 'compute diagonal hessian coefficients to condition learning rates')
cmd:option('-hessiansamples', 500, 'number of samples to use to estimate hessian')
cmd:option('-hessianinterval', 10000, 'compute diagonal hessian coefs at every this many samples')
cmd:option('-minhessian', 0.02, 'min hessian to avoid extreme speed up')
cmd:option('-maxhessian', 500, 'max hessian to avoid extreme slow down')

-- for conv models:
cmd:option('-kernelsize', 9, 'size of convolutional kernels')

-- logging:
--cmd:option('-datafile', 'http://torch7.s3-website-us-east-1.amazonaws.com/data/tr-berkeley-N5K-M56x56-lcn.ascii', 'Dataset URL')
cmd:option('-datafile', './mnist.t7/train_32x32.t7', 'Dataset URL')
cmd:option('-statinterval', 5000, 'interval for saving stats and models')
cmd:option('-v', true, 'be verbose')
cmd:option('-display', false, 'display stuff')
cmd:option('-wcar', '', 'additional flag to differentiate this run')
cmd:text()

arg = ""   -- override the command-line arguments so cmd:parse() returns the defaults above

params = cmd:parse(arg)

rundir = cmd:string('psd', params, {dir=true})
params.rundir = params.dir .. '/' .. rundir

if paths.dirp(params.rundir) then
   os.execute('rm -r ' .. params.rundir)
end
os.execute('mkdir -p ' .. params.rundir)
cmd:addTime('psd')
cmd:log(params.rundir .. '/log.txt', params)

torch.manualSeed(params.seed)

torch.setnumthreads(params.threads)

----------------------------------------------------------------------
-- load data
--
filename = paths.basename(params.datafile)
--if not paths.filep(filename) then
--   os.execute('wget ' .. params.datafile .. '; '.. 'tar xvf ' .. filename)
--end
dataset = getdata(filename, params.inputsize, 0.2)

if params.display then
   displayData(dataset, 100, 10, 2)
end

----------------------------------------------------------------------
-- create model
--
if params.model == 'linear' then

   -- params
   inputSize = params.inputsize*params.inputsize
   outputSize = params.nfiltersout

   -- encoder
   encoder = nn.Sequential()
   encoder:add(nn.Linear(inputSize,outputSize))
   encoder:add(nn.Tanh())
   encoder:add(nn.Diag(outputSize))

   -- decoder
   decoder = nn.Sequential()
   decoder:add(nn.Linear(outputSize,inputSize))

   -- complete model
   module = unsup.AutoEncoder(encoder, decoder, params.beta)

   -- verbose
   print('==> constructed linear auto-encoder')

elseif params.model == 'conv' then

   -- params:
   conntable = nn.tables.full(params.nfiltersin, params.nfiltersout)
   kw, kh = params.kernelsize, params.kernelsize
   iw, ih = params.inputsize, params.inputsize

   -- connection table:
   local decodertable = conntable:clone()
   decodertable[{ {},1 }] = conntable[{ {},2 }]
   decodertable[{ {},2 }] = conntable[{ {},1 }]
   local outputFeatures = conntable[{ {},2 }]:max()

   -- encoder:
   encoder = nn.Sequential()
   encoder:add(nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1))
   encoder:add(nn.Tanh())
   encoder:add(nn.Diag(outputFeatures))

   -- decoder:
   decoder = nn.Sequential()
   decoder:add(nn.SpatialFullConvolutionMap(decodertable, kw, kh, 1, 1))

   -- complete model
   module = unsup.AutoEncoder(encoder, decoder, params.beta)

   -- convert dataset to convolutional (returns 1xKxK tensors (3D), instead of K*K (1D))
   dataset:conv()

   -- verbose
   print('==> constructed convolutional auto-encoder')

elseif params.model == 'linear-psd' then

   -- params
   inputSize = params.inputsize*params.inputsize
   outputSize = params.nfiltersout

   -- encoder
   encoder = nn.Sequential()
   encoder:add(nn.Linear(inputSize,outputSize))
   encoder:add(nn.Tanh())
   encoder:add(nn.Diag(outputSize))

   -- decoder is L1 solution
   decoder = unsup.LinearFistaL1(inputSize, outputSize, params.lambda)

   -- PSD autoencoder
   module = unsup.PSD(encoder, decoder, params.beta)

   -- verbose
   print('==> constructed linear predictive sparse decomposition (PSD) auto-encoder')

elseif params.model == 'conv-psd' then

   -- params:
   conntable = nn.tables.full(params.nfiltersin, params.nfiltersout)
   kw, kh = params.kernelsize, params.kernelsize
   iw, ih = params.inputsize, params.inputsize

   -- connection table:
   local decodertable = conntable:clone()
   decodertable[{ {},1 }] = conntable[{ {},2 }]
   decodertable[{ {},2 }] = conntable[{ {},1 }]
   local outputFeatures = conntable[{ {},2 }]:max()

   -- encoder:
   encoder = nn.Sequential()
   encoder:add(nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1))
   encoder:add(nn.Tanh())
   encoder:add(nn.Diag(outputFeatures))

   -- decoder is L1 solution:
   --decoder = unsup.SpatialConvFistaL1(decodertable, kw, kh, iw, ih, params.lambda)
   decoder = unsupgpu.SpatialConvFistaL1(decodertable, kw, kh, iw, ih, 0, 0, params.lambda, params.batchsize)

   -- PSD autoencoder
   --module = unsup.PSD(encoder, decoder, params.beta)
   module = unsupgpu.PSD(encoder, decoder, params.beta)

   -- convert dataset to convolutional (returns 1xKxK tensors (3D), instead of K*K (1D))
   dataset:conv()

   -- verbose
   print('==> constructed convolutional predictive sparse decomposition (PSD) auto-encoder')

else
   print('==> unknown model: ' .. params.model)
   os.exit()
end

----------------------------------------------------------------------
-- trainable parameters
--

-- are we using the hessian?
if params.hessian then
   nn.hessian.enable()
   module:initDiagHessianParameters()
end

-- get all parameters
x,dl_dx,ddl_ddx = module:getParameters()

----------------------------------------------------------------------
-- train model
--

print('==> training model')

local avTrainingError = torch.FloatTensor(math.ceil(params.maxiter/params.statinterval)):zero()
local err = 0
local iter = 0

for t = 1,params.maxiter,params.batchsize do

   --------------------------------------------------------------------
   -- update diagonal hessian parameters
   --
   if params.hessian and math.fmod(t , params.hessianinterval) == 1 then
      -- some extra vars:
      local hessiansamples = params.hessiansamples
      local minhessian = params.minhessian
      local maxhessian = params.maxhessian
      local ddl_ddx_avg = ddl_ddx:clone(ddl_ddx):zero()
      etas = etas or ddl_ddx:clone()

      print('==> estimating diagonal hessian elements')
      for i = 1,hessiansamples do
         -- next
         local ex = dataset[i]
         local input = ex[1]
         local target = ex[2]
         module:updateOutput(input, target)

         -- gradient
         dl_dx:zero()
         module:updateGradInput(input, target)
         module:accGradParameters(input, target)

         -- hessian
         ddl_ddx:zero()
         module:updateDiagHessianInput(input, target)
         module:accDiagHessianParameters(input, target)

         -- accumulate
         ddl_ddx_avg:add(1/hessiansamples, ddl_ddx)
      end

      -- cap hessian params
      print('==> ddl/ddx : min/max = ' .. ddl_ddx_avg:min() .. '/' .. ddl_ddx_avg:max())
      ddl_ddx_avg[torch.lt(ddl_ddx_avg,minhessian)] = minhessian
      ddl_ddx_avg[torch.gt(ddl_ddx_avg,maxhessian)] = maxhessian
      print('==> corrected ddl/ddx : min/max = ' .. ddl_ddx_avg:min() .. '/' .. ddl_ddx_avg:max())

      -- generate learning rates
      etas:fill(1):cdiv(ddl_ddx_avg)
   end

   --------------------------------------------------------------------
   -- progress
   --
   iter = iter+1
   xlua.progress(iter, params.statinterval)

   --------------------------------------------------------------------
   -- create mini-batch
   --
   local example = dataset[t]
   local inputs = {}
   local targets = {}
   for i = t,t+params.batchsize-1 do
      -- load new sample
      local sample = dataset[i]
      local input = sample[1]:clone()
      local target = sample[2]:clone()
      table.insert(inputs, input)
      table.insert(targets, target)
   end

   --------------------------------------------------------------------
   -- define eval closure
   --
   local feval = function()
      -- reset gradient/f
      local f = 0
      dl_dx:zero()

      -- estimate f and gradients, for minibatch
      for i = 1,#inputs do
         -- f
         f = f + module:updateOutput(inputs[i], targets[i])

         -- gradients
         module:updateGradInput(inputs[i], targets[i])
         module:accGradParameters(inputs[i], targets[i])
      end

      -- normalize
      dl_dx:div(#inputs)
      f = f/#inputs

      -- return f and df/dx
      return f,dl_dx
   end

   --------------------------------------------------------------------
   -- one SGD step
   --
   sgdconf = sgdconf or {learningRate = params.eta,
                         learningRateDecay = params.etadecay,
                         learningRates = etas,
                         momentum = params.momentum}
   _,fs = optim.sgd(feval, x, sgdconf)
   err = err + fs[1]

   -- normalize
   if params.model:find('psd') then
      module:normalize()
   end

   --------------------------------------------------------------------
   -- compute statistics / report error
   --
   if math.fmod(t , params.statinterval) == 0 then

      -- report
      print('==> iteration = ' .. t .. ', average loss = ' .. err/params.statinterval)

      -- get weights
      eweight = module.encoder.modules[1].weight
      if module.decoder.D then
         dweight = module.decoder.D.weight
      else
         dweight = module.decoder.modules[1].weight
      end

      -- reshape weights if linear matrix is used
      if params.model:find('linear') then
         dweight = dweight:transpose(1,2):unfold(2,params.inputsize,params.inputsize)
         eweight = eweight:unfold(2,params.inputsize,params.inputsize)
      end

      -- render filters
      dd = image.toDisplayTensor{input=dweight,
                                 padding=2,
                                 nrow=math.floor(math.sqrt(params.nfiltersout)),
                                 symmetric=true}
      de = image.toDisplayTensor{input=eweight,
                                 padding=2,
                                 nrow=math.floor(math.sqrt(params.nfiltersout)),
                                 symmetric=true}

      -- live display
      if params.display then
         _win1_ = image.display{image=dd, win=_win1_, legend='Decoder filters', zoom=2}
         _win2_ = image.display{image=de, win=_win2_, legend='Encoder filters', zoom=2}
      end

      -- save stuff
      image.save(params.rundir .. '/filters_dec_' .. t .. '.jpg', dd)
      image.save(params.rundir .. '/filters_enc_' .. t .. '.jpg', de)
      torch.save(params.rundir .. '/model_' .. t .. '.bin', module)

      -- reset counters
      err = 0; iter = 0
   end
end

install error

During the installation, I got the following error.

-- Found Torch7 in /home/wangwei/utils/torch/install
CMake Error at /usr/share/cmake/Modules/FindCUDA.cmake:1428 (add_library):
add_library cannot create target "unsupgpu" because another target with the
same name already exists. The existing target is a module library created
in source directory "/tmp/luarocks_unsupgpu-0.1-0-800/unsupgpu". See
documentation for policy CMP0002 for more details.
Call Stack (most recent call first):
/home/wangwei/utils/torch/install/share/cmake/torch/TorchPackage.cmake:12 (CUDA_ADD_LIBRARY)
CMakeLists.txt:22 (ADD_TORCH_PACKAGE)

/nn/WeightedMSECriterion.lua:10: bad argument #1 to 'copy'

Hi, I am new to Torch and I want to do unsupervised learning on images. The input images are 38x78 grayscale; the training dataset is a CUDA tensor of size 1800x38x78. For the convolution layer, the kernel size is 7x7, the padding is 0, and the stride is the default 1. Code as follows:

-- for all models:
cmd:option('-model', 'conv-psd', 'auto-encoder class: linear | linear-psd | conv | conv-psd')
cmd:option('-inputsizeX', 38, 'sizeX of each input patch')
cmd:option('-inputsizeY', 78, 'sizeY of each input patch')
cmd:option('-nfiltersin', 1, 'number of input convolutional filters')
cmd:option('-nfiltersout', 16, 'number of output convolutional filters')
cmd:option('-lambda', 1, 'sparsity coefficient')
cmd:option('-beta', 1, 'prediction error coefficient')
cmd:option('-eta', 2e-3, 'learning rate')
cmd:option('-batchsize', 1, 'batch size')
cmd:option('-etadecay', 1e-5, 'learning rate decay')
cmd:option('-momentum', 0.9, 'gradient momentum')
cmd:option('-maxiter', 10000, 'max number of updates')

-- use hessian information for training:
cmd:option('-hessian', true, 'compute diagonal hessian coefficients to condition learning rates')
cmd:option('-hessiansamples', 500, 'number of samples to use to estimate hessian')
cmd:option('-hessianinterval', 10000, 'compute diagonal hessian coefs at every this many samples')
cmd:option('-minhessian', 0.02, 'min hessian to avoid extreme speed up')
cmd:option('-maxhessian', 500, 'max hessian to avoid extreme slow down')

-- for conv models:
cmd:option('-kernelsize', 7, 'size of convolutional kernels')

-- logging:
cmd:option('-statinterval', 5000, 'interval for saving stats and models')
cmd:option('-v', false, 'be verbose')
cmd:option('-display', false, 'display stuff')
cmd:option('-wcar', '', 'additional flag to differentiate this run')
cmd:text()

params = cmd:parse(arg)

rundir = cmd:string('psd', params, {dir=true})
params.rundir = params.dir .. '/' .. rundir

if paths.dirp(params.rundir) then
   os.execute('rm -r ' .. params.rundir)
end
os.execute('mkdir -p ' .. params.rundir)
cmd:addTime('psd')
cmd:log(params.rundir .. '/log.txt', params)

torch.setdefaulttensortype('torch.FloatTensor')

cutorch.setDevice(1) -- by default, use GPU 1
torch.manualSeed(params.seed)
local statinterval = torch.floor(params.statinterval / params.batchsize)*params.batchsize
local hessianinterval = torch.floor(params.hessianinterval / params.batchsize)*params.batchsize
print (statinterval)
print (hessianinterval)


--torch.manualSeed(params.seed)

torch.setnumthreads(params.threads)

----------------------------------------------------------------------
-- load data
dataset = torch.load('/torch/extra/unsupgpu/src/train.t7')

----------------------------------------------------------------------
-- create model
   local conntable = nn.tables.full(params.nfiltersin, params.nfiltersout)
   local kw, kh = params.kernelsize, params.kernelsize
   local W,H = params.inputsizeX, params.inputsizeY
   --local padw, padh = torch.floor(params.kernelsize/2.0), torch.floor(params.kernelsize/2.0)
   local padw, padh = 0, 0
   local batchSize = params.batchsize or 1
   -- connection table:
   local decodertable = conntable:clone()
   decodertable[{ {},1 }] = conntable[{ {},2 }]
   decodertable[{ {},2 }] = conntable[{ {},1 }] 
   local outputFeatures = conntable[{ {},2 }]:max()
   local inputFeatures = conntable[{ {},1 }]:max()

   -- encoder:
   encoder = nn.Sequential()
   encoder:add(nn.SpatialConvolution(inputFeatures,outputFeatures, kw, kh, 1, 1, padw, padh))
   encoder:add(nn.Tanh())

   encoder:add(nn.Diag(outputFeatures))

   -- decoder is L1 solution:
   print(kw, kh, W, H, padw, padh, params.lambda, batchSize) 
   print(decodertable)
   decoder = unsupgpu.SpatialConvFistaL1(decodertable, kw, kh, W, H, padw, padh, params.lambda, batchSize)

   -- PSD autoencoder
   module = unsupgpu.PSD(encoder, decoder, params.beta)

   module:cuda()
   -- convert dataset to convolutional (returns 1xKxK tensors (3D), instead of K*K (1D))
   --dataset:conv()

   -- verbose
   print('==> constructed convolutional predictive sparse decomposition (PSD) auto-encoder')

----------------------------------------------------------------------
-- trainable parameters
--

-- are we using the hessian?
if params.hessian then
   nn.hessian.enable()
   module:initDiagHessianParameters()
end

-- get all parameters
x,dl_dx,ddl_ddx = module:getParameters()

----------------------------------------------------------------------
-- train model
--

print('==> training model')

local avTrainingError = torch.FloatTensor(math.ceil(params.maxiter/params.statinterval)):zero()
local err = 0
local iter = 0

for t = 1,params.maxiter,params.batchsize do

   --------------------------------------------------------------------
   -- update diagonal hessian parameters
   --
   if params.hessian and math.fmod(t , hessianinterval) == 1 then
      -- some extra vars:
      local batchsize = params.batchsize
      local hessiansamples = params.hessiansamples
      local minhessian = params.minhessian
      local maxhessian = params.maxhessian
      local ddl_ddx_avg = ddl_ddx:clone(ddl_ddx):zero()
      etas = etas or ddl_ddx:clone()

      print('==> estimating diagonal hessian elements')

      for ih = 1,hessiansamples,batchsize do
        print ('==>')
        print (ih)
        local inputs  = torch.Tensor(params.batchsize,params.nfiltersin,params.inputsizeX,params.inputsizeY)
        local targets = torch.Tensor(params.batchsize,params.nfiltersin,params.inputsizeX,params.inputsizeY)
        for i = ih,ih+batchsize-1 do
          -- next
          local input  = dataset.data[i]
          --input:resize(torch.CudaTensor(1,3*96,160))
          local target = dataset.data[i]
          --target:resize(torch.CudaTensor(1,3*96,160))
          inputs[{i-ih+1,{},{},{}}] = input
          targets[{i-ih+1,{},{},{}}] = target
        end

        local inputs_ = inputs:cuda()
        local targets_ = targets:cuda()
        print (#inputs_)
        print (#targets_)
        print ("==> equal")
        print (torch.all(torch.eq(inputs_, targets_)))
        --print (torch.eq(inputs_, targets_))
        --module:updateGradInput(inputs_, targets_)

At the end of the code, when calling updateGradInput, I get the size mismatch problem below.

/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: bad argument #1 to 'copy' (sizes do not match at torch/extra/cutorch/lib/THC/generic/THCTensorCopy.cu:10)
stack traceback:
[C]: in function 'copy'
...ch/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: in function 'updateOutput'
...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:51: in function 'f'
.../deguoxi/torch/install/share/lua/5.1/optim/fista.lua:83: in function 'FistaLS'
...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
.../torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
main.lua:182: in main chunk
I checked that inputs_ and targets_ are equal in size, and they are. Any clue how to solve this problem?
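
For reference, here is my own size bookkeeping for this setup, assuming the standard convolution output-size formula; the variable names are only for illustration and nothing here comes from the unsupgpu internals:

-- encoder: nn.SpatialConvolution(1, 16, 7, 7, 1, 1, 0, 0) applied to a 1x38x78 input
local codeH = (38 - 7) / 1 + 1   -- 32
local codeW = (78 - 7) / 1 + 1   -- 72
-- a full (transposed) 7x7 convolution of the 16x32x72 code gives
-- (32 + 7 - 1) x (72 + 7 - 1) = 38 x 78, the same spatial size as the target,
-- so if this arithmetic applies, the copy() failure presumably comes from the
-- batch dimension or tensor layout rather than from the spatial sizes.
print(codeH, codeW)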

require 'unsupgpu' failed

I am trying to run the demo Lua file, and I followed the installation instructions. Before luarocks install unsupgpu-0.1-0.rockspec I also used cmake and make install. But require 'unsupgpu' fails. Does anybody have any clue?
lua: demo.lua:8: module 'unsupgpu' not found: no field package.preload['unsupgpu']
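
Not a fix, but a quick way to see where require is actually looking; this is plain Lua, nothing specific to unsupgpu:

-- print the search paths used by require; if the Torch/luarocks install tree
-- (the share/lua/5.1 and lib/lua/5.1 directories) is missing from these,
-- require 'unsupgpu' fails with "module ... not found" even after a successful build
print(package.path)    -- where require looks for .lua files
print(package.cpath)   -- where require looks for compiled .so modules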

install issue on mac

The CMakeLists.txt file doesn't seem to work correctly on macOS, as -Wl,-export-dynamic is not accepted by the OS X linker. A simple solution I found was to take the CMake and rockspec files from cutorch and modify them. I can send you a PR if you'd like.

shrinkagegpu cuda command unrecognized

@viorik Hi,
I am now getting the following error:

/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:79: attempt to call method 'shrinkagegpu' (a nil value)
stack traceback:
/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua: in function 'pl'
/home/torch/install/share/lua/5.1/optim/fista.lua:95: in function 'FistaLS'
/home/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
/home/torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
train-autoencoder-mnist.lua:274: in main chunk

It appears that this CUDA method is not recognized.
BTW, the input to
code:shrinkagegpu(self.lambda/L)
in this case is:
code - torch.DoubleTensor of size 16x32x32
self.lambda - 1
L - 0.1
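
My guess (not confirmed anywhere in the package docs) is that shrinkagegpu is only defined for CUDA tensors, so calling it on a torch.DoubleTensor resolves to a nil method. A minimal sketch of what I think is needed, assuming the whole pipeline has to live on the GPU, as the batched example in an earlier issue does with module:cuda():

require 'cutorch'
require 'cunn'
module:cuda()                       -- move encoder/decoder parameters and buffers to the GPU
local input  = ex[1]:cuda()         -- ex is an {input, target} sample as in the demo;
local target = ex[2]:cuda()         -- with CUDA inputs the FISTA code tensor should also be
module:updateOutput(input, target)  -- a CudaTensor when code:shrinkagegpu(...) is called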

fista.lua:61: bad argument #1 to 'resizeAs' (torch.FloatTensor expected, got torch.CudaTensor)

Running demo_conv_psd_gpu.lua or my own code, I get the following error. It seems FistaL1.lua still calls optim.FistaLS at line 119:

local code, h = optim.FistaLS(self.f, self.g, self.pl, self.code, self.params)

error:

==> constructed convolutional predictive sparse decomposition (PSD) auto-encoder
==> training model
==> estimating diagonal hessian elements
XXX/torch/install/share/lua/5.1/optim/fista.lua:61: bad argument #1 to 'resizeAs' (torch.FloatTensor expected, got torch.CudaTensor)
stack traceback:
    [C]: at 0x010e1851d0
    [C]: in function 'resizeAs'
    /Users/XXX/torch/install/share/lua/5.1/optim/fista.lua:61: in function 'FistaLS'
    .../XXX/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
    /Users/XXX/torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
    demo_conv_psd_gpu.lua:186: in main chunk
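
For what it's worth, my reading of the mechanism (a sketch of my understanding from the error message, not a fix): optim.FistaLS allocates its working buffers with the default tensor type, roughly like this, so they clash with a CUDA self.code at the resizeAs call:

-- paraphrase of what I believe optim/fista.lua does internally
local xkm = torch.Tensor()     -- created with the default type, i.e. a FloatTensor here
-- ...
-- xkm:resizeAs(xinit)         -- xinit is self.code; once the model is :cuda()'d this is
--                             -- a CudaTensor, hence "torch.FloatTensor expected, got
--                             -- torch.CudaTensor" at fista.lua:61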
