
mocha.jl's People

Contributors

aamini, adambrewster, antholzer, benmoran, bisraelsen, carlolucibello, chezou, credentiality, droidicus, greenflash1357, iizukak, maximsch2, musm, nstiurca, pcmoritz, pluskid, r9y9, rdeits, rened, riwsky, rofinn, slundberg, steven-varga, stjanovitz, stokasto, the-moliver, uschmidt83, vyp, zacsketches, zhmz90

mocha.jl's Issues

Out of memory error

I'm constantly getting an "out of memory" error while doing a grid search. It seems like Mocha maps a huge amount of memory for the GPU backend. If I use the CPU backend, everything is fine and memory usage stays low.
[screenshot of memory usage, 2015-08-31]

Is this a bug? Is there any workaround?

julia> Pkg.status()
12 required packages:
 - CUBLAS                        0.0.1
 - CUDA                          0.1.0
 - CUDArt                        0.1.3
 - CUFFT                         0.0.3
 - Cairo                         0.2.29
 - Colors                        0.5.2
 - DataFrames                    0.6.9
 - HttpParser                    0.0.13
 - IJulia                        0.2.5
 - Images                        0.4.46
 - MLBase                        0.5.1
 - Mocha                         0.0.9+             master
30 additional packages:
 - ArrayViews                    0.6.3
 - BinDeps                       0.3.15
 - Blosc                         0.1.4
 - ColorTypes                    0.1.3
 - ColorVectorSpace              0.0.2
 - Compat                        0.6.0
 - DataArrays                    0.2.18
 - Dates                         0.3.2
 - Docile                        0.5.16
 - FixedPointNumbers             0.0.10
 - GZip                          0.2.17
 - Graphics                      0.1.0
 - HDF5                          0.5.5
 - HttpCommon                    0.1.2
 - Iterators                     0.1.8
 - JLD                           0.5.4
 - JSON                          0.4.5
 - Logging                       0.1.1
 - Nettle                        0.1.10
 - REPLCompletions               0.0.3
 - Reexport                      0.0.2
 - SHA                           0.1.1
 - SIUnits                       0.0.5
 - SortingAlgorithms             0.0.5
 - StatsBase                     0.7.1
 - StatsFuns                     0.1.2
 - TexExtensions                 0.0.2
 - URIParser                     0.0.7
 - ZMQ                           0.2.0
 - Zlib                          0.1.9

julia> versioninfo()
Julia Version 0.3.11
Commit 483dbf5 (2015-07-27 06:18 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: liblapack.so.3
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

Getting error: could not create file mapping

Any hints on the following error?

julia> include("mnist.jl")
22-Dec 16:14:40:INFO:root:Constructing net MNIST-train on CPUBackend...
22-Dec 16:14:40:INFO:root:Topological sorting 8 layers...
22-Dec 16:14:40:INFO:root:Setup layers...
ERROR: could not create file mapping: Access is denied.
 in error at error.jl:21
 in mmap_array at mmap.jl:144
 in readmmap at C:\Users\Harsh\.julia\v0.3\HDF5\src\plain.jl:1411
 in readmmap at C:\Users\Harsh\.julia\v0.3\HDF5\src\plain.jl:1418
 in map at base.jl:189
 in HDF5DataLayerState at C:\Users\Harsh\.julia\v0.3\Mocha\src\layers/hdf5-data.jl:45
 in setup at C:\Users\Harsh\.julia\v0.3\Mocha\src\layers/hdf5-data.jl:73
 in setup at C:\Users\Harsh\.julia\v0.3\Mocha\src\layers.jl:107
 in Net at C:\Users\Harsh\.julia\v0.3\Mocha\src\net.jl:166
 in include at boot.jl:245
 in include_from_node1 at loading.jl:128
while loading C:\Users\Harsh\Desktop\Kaggle\Julia\CNN\mnist\mnist.jl, in expression starting on line 22

could not find function cudnnSetFilter4dDescriptor

Hello,

I tried to run mnist.jl.

I got the following message:
20-Jan 20:37:16:INFO:root:Configuring Mocha...
20-Jan 20:37:16:INFO:root: * CUDA enabled (MOCHA_USE_CUDA environment variable detected)
20-Jan 20:37:16:INFO:root: * Native Ext disabled by default
20-Jan 20:37:16:INFO:root:Mocha configured, continue loading module...
20-Jan 20:37:19:INFO:root:Initializing CuDNN backend...
20-Jan 20:37:19:INFO:root:CuDNN backend initialized!
20-Jan 20:37:20:INFO:root:Constructing net MNIST-train on GPUBackend...
20-Jan 20:37:20:INFO:root:Topological sorting 8 layers...
20-Jan 20:37:20:INFO:root:Setup layers...
ERROR: ccall: could not find function cudnnSetFilter4dDescriptor in library libcudnn
in set_filter_descriptor at /home/rudy/.julia/v0.3/Mocha/src/cuda/cudnn.jl:41
while loading /home/rudy/.julia/v0.3/Mocha/examples/mnist/mnist.jl, in expression starting on line 22

many thanks for helping

LRPolicy.DecayOnValidation causes error if used with SquareLossLayer

Great package, thanks for all your work!

I tried to use the DecayOnValidation learning rate policy together with a SquareLoss and have two questions/bugs:

  • I get the error message policy not defined. See this Gist for a minimal example. Is my setup correct?
  • As I understand this code, the learning rate would only drop when the loss decreases, which is not what we want for a SquareLoss.

No simple way to store the outputs of a network

I am working on a network that processes audio files (on a frame-by-frame basis) and would like to listen to the outputs. However, there is no easy way to get a snapshot of a model and extract the output of a forward pass for a single sample (or for a batch of samples). I am going to use something similar to this gist, but that is overly complicated and has to be done for each network separately.

What could we implement to make this task easier?
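One route that might help in the meantime (an untested sketch; I am assuming HDF5OutputLayer takes a filename parameter, that load_snapshot restores weights in your Mocha version, and :audio_out stands in for your net's output blob): attach an HDF5OutputLayer to a deploy copy of the net, so every forward pass appends the output blob to disk.

# hypothetical deploy net; :audio_out is whatever your last layer's top is called
data = MemoryDataLayer(tops=[:data], batch_size=1, data=Array[input_frames])
dump = HDF5OutputLayer(filename="outputs.h5", bottoms=[:audio_out])
net  = Net("deploy", backend, [data, your_layers..., dump])
load_snapshot(net, "snapshots/snapshot-010000.jld")  # assumption: restores trained weights
forward(net)  # the :audio_out blob is now written to outputs.h5

A generic "deploy net" helper along these lines would still be a nice addition, since right now this has to be assembled by hand for each network.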

Implementation of RNN

So I want to work on adding RNN functionality mainly to help myself understand them better and to do something of a larger scale in Julia! I did want to open this issue though so that there would be a forum for discussion about implementation.

Here are my current thoughts. I don't know if they're consistent with Mocha's architecture, or even with the principles of RNNs, as I only spent a little time getting acquainted, but here goes. Please point out any of my misunderstandings!

RNN Specific Stuff

  • Not strictly forward and back
  • Backprop is unrolled through time instead, which essentially means a final "equivalent" FF net of varying sizes dependent on the number of time steps to backprop
  • LSTM to prevent exploding/vanishing gradients

Topology of an RNN in Mocha

To my understanding, there are split layers which allow a layer's output to be sent to two different layers and still be able to play nice with backprop. An RNN implementation would likely need to use this. Additionally, would something like a join layer be necessary?

Caffe

I think BVLC/caffe#1873 is the relevant thread from Caffe.

If I'm understanding correctly, one of the inputs to a recurrent layer is a stream that represents the past states of that layer. Understandably, the forward prop is exact, as it only depends on the current value of an input layer and the most recent past value, presumably stored at one end of the stream. He mentions, however, that the backprop is approximate. This is the part I don't understand at all: how is the backprop being approximated?

Thanks for reading!

cuDNN unavailable

Hello,

I want to use Mocha with a GPU backend; however, the CUDA version installed on the system is 5.0, which does not support cuDNN.
I have seen issue #52, and the only problem with my network is that it uses sigmoid neurons and a softmax layer, which need cuDNN. Is there a workaround to make it work even if computation with sigmoids and softmax becomes less efficient, e.g. making it use a CPU backend for those?

Thanks in advance for the help!

Channelpoolinglayer description

I am just going through the docs, and this part is perhaps already outdated as the ND-tensors are already implemented?

This is called channel pooling layer because it is designed to pool over the channel dimension when Mocha can only handle 4D tensors. For general ND-tensors, the "channel" dimension no longer has a specific semantic, and could be specified by the user.

I find "can only handle 4D" confusing - it can do ND now, can't it?
Thanks!

Data I/O

Hi, I'm new to the Mocha package but very intrigued. I've successfully run several of your examples, read through the documentation, and am very impressed with the work. Now I am hoping to try some of my own data. Much of my data is stored in the form exemplified here:

https://github.com/ntustison/KapowskiChronicles/blob/master/analytics2/labelresultsANTsOVolume.csv

For this file, we might want to use columns 5 through ncol(data) to predict column 4 (age) or column 3 (sex).

What I am hoping is that you could show me how to map this data into a format that could be used with Mocha, e.g. via HDF5DataLayer. I also have additional data in the form of images, and I usually employ R for learning tasks, so ultimately I would like to be able to do the same with images. FYI, I have read through the examples/mnist scripts, but was hoping you could help with the case outlined above: i.e. how to take a CSV file and use its entries to train and test with Mocha.
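Not the author, but here is roughly the shape of it (an untested sketch; it assumes the HDF5 dataset names must match the tops of the HDF5DataLayer, data and label by default, and that the last tensor dimension indexes the samples):

using HDF5

raw, hdr = readcsv("labelresultsANTsOVolume.csv", header=true)
X = float(raw[:, 5:end])'   # features × samples (samples go last in Mocha)
y = float(raw[:, 4])        # e.g. age as the regression target

h5open("train.hdf5", "w") do h5
  write(h5, "data",  reshape(X, size(X, 1), 1, 1, size(X, 2)))
  write(h5, "label", reshape(y, 1, 1, 1, length(y)))
end
# then put the single line "train.hdf5" into train-data-list.txt and point
# HDF5DataLayer(source="train-data-list.txt", batch_size=...) at it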

Thanks again for this work!

Tests for padded-copy.jl are failing

The tests fail with the log output down below. I would like to potentially replace some of my research code with Mocha and need padding for it :) If you want, I can have a look at this, but maybe you already caught it.
Oh, by the way, I printed the arrays, and weirdly enough the first two channels are correctly padded output while the rest of the entries are too large by an order of magnitude.

dense2padded! mirror = false
ERROR: test failed: all(abs(got - orig) .< eps)
 in error at error.jl:21
 in default_handler at test.jl:19
 in do_test at test.jl:39
 in test_padded_copy at /home/springj/.julia/v0.3/Mocha/test/cuda/padded-copy.jl:17
 in test_padded_copy at /home/springj/.julia/v0.3/Mocha/test/cuda/padded-copy.jl:29
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354

could not load module libcuda: dlopen(libcuda.dylib, 1): image not found

I am able to compile and execute all CUDA samples on my machine, and I successfully built kernels.ptx, but when I try to start up the backend I get the following error:

22-Nov 13:55:25:INFO:root:Initializing CuDNN backend...
error compiling init: could not load module libcuda: dlopen(libcuda.dylib, 1): image not found
while loading In[9], in expression starting on line 2

in init at /Users/arshakn/.julia/v0.3/Mocha/src/cuda/backend.jl:131
in init at /Users/arshakn/.julia/v0.3/Mocha/src/system.jl:11

nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Mon_Oct_20_21:28:13_CDT_2014
Cuda compilation tools, release 6.5, V6.5.24

versioninfo(true)

Julia Version 0.3.2
Commit 21d5433* (2014-10-21 20:18 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.3.0)
  CPU: Intel(R) Core(TM) i7-3635QM CPU @ 2.40GHz
  WORD_SIZE: 64
  uname: Darwin 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64 i386
  Memory: 16.0 GB (10326.09375 MB free)
  Uptime: 394005.0 sec
  Load Avg: 2.4765625 1.91796875 1.77685546875
  Intel(R) Core(TM) i7-3635QM CPU @ 2.40GHz:
         speed         user         nice          sys         idle          irq
  #1  2400 MHz      86129 s          0 s      76973 s     800226 s          0 s
  #2  2400 MHz       6524 s          0 s       3337 s     953296 s          0 s
  #3  2400 MHz      57648 s          0 s      33256 s     872255 s          0 s
  #4  2400 MHz       5638 s          0 s       2547 s     954971 s          0 s
  #5  2400 MHz      48554 s          0 s      26602 s     888002 s          0 s
  #6  2400 MHz       5108 s          0 s       2290 s     955757 s          0 s
  #7  2400 MHz      41670 s          0 s      22129 s     899358 s          0 s
  #8  2400 MHz       4721 s          0 s       2109 s     956325 s          0 s
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3
Environment:
  GEM_HOME = /Users/arshakn/.rvm/gems/ruby-2.0.0-p247
  TERM = xterm-256color
  D4M_HOME = /Users/arshakn/d4m_api
  MY_RUBY_HOME = /Users/arshakn/.rvm/rubies/ruby-2.0.0-p247
  PATH = /Applications/Julia-0.3.2.app/Contents/Resources/julia/bin:/Applications/Julia-0.3.2.app/Contents/Resources/julia/libexec/git-core:/Users/arshakn/anaconda/bin:/Users/arshakn/google-cloud-sdk/bin:/opt/local/bin:/opt/local/sbin:/Users/arshakn/orca/bin:/opt/local/bin:/usr/local/spark-0.8.0-incubating:/Developer/NVIDIA/CUDA-6.5/bin:/Users/arshakn/.rvm/gems/ruby-2.0.0-p247/bin:/Users/arshakn/.rvm/gems/ruby-2.0.0-p247@global/bin:/Users/arshakn/.rvm/rubies/ruby-2.0.0-p247/bin:/Users/arshakn/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Applications/domino:/usr/local/go/bin
  HOME = /Users/arshakn
  DYLD_LIBRARY_PATH = /Developer/NVIDIA/CUDA-6.5/lib:
  GEM_PATH = /Users/arshakn/.rvm/gems/ruby-2.0.0-p247:/Users/arshakn/.rvm/gems/ruby-2.0.0-p247@global
  FONTCONFIG_PATH = /Applications/Julia-0.3.2.app/Contents/Resources/julia/etc/fonts
  GIT_EXEC_PATH = /Applications/Julia-0.3.2.app/Contents/Resources/julia/libexec/git-core

Accuracy computation

Hi,

It seems to me that the line
pred[w,h,int(label[w,h,1,n])+1,n] >= maximum(pred[w, h, 1:channels, n])
contains a potential bug if the prediction scores are, for instance, identically zero.

Note that this is very unlikely to happen after training starts, but it might be nice to fix just in case.

It might be better to compute an argmax of pred and then check that it is equal to the correct label.
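Agreed. As a sketch of the suggested fix (plain Julia over the 4D blob, untested, using the same int()/0-based-label convention as the line quoted above):

# argmax over the channel dimension, then compare with the label
correct = 0
for n = 1:num, h = 1:height, w = 1:width
  pred_class = indmax(pred[w, h, 1:channels, n])   # 1-based index of the max score
  if pred_class == int(label[w, h, 1, n]) + 1      # labels are 0-based
    correct += 1
  end
end
accuracy = correct / (num * height * width)

Since indmax returns a single well-defined index, identically-zero (or otherwise tied) scores no longer count as correct for every class.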

-Nick

Don't load solver state

Forgive me if I've missed this, but Snapshot used to have an also_load_solver_state flag which could be set to false so that only the network weights are loaded. With the newest version it seems it's only possible to load both the weights and the solver state via the load_from parameter in SolverParameters. Is there still, or will there be, a way to ignore the solver state when loading a snapshot?

ConcatLayer

Currently there's a way to split one blob into many, but as far as I could tell no way to join two or more blobs again. I believe what I'm looking for is something equivalent to Caffe's ConcatLayer (http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ConcatLayer.html#details).

My personal use for this would be to take the max of multiple blobs in the same way as the ChannelPoolingLayer does currently over the channel dimension so anything that solves that works for me, but I think a ConcatLayer-type thing would be the most general solution.

Are there plans to support this? I'd be happy to try to contribute myself, but I'm currently slightly scared of the layer definitions, so I might need some help with it in that case.

Memory data layer

Hi Chiyuan, thanks for the great work. I have read the documentation and tried the tutorial example, and it works fine. However, I am not able to use Mocha with my own data: I can't figure out how to save my data in the HDF5 format for non-image data, and I only need the memory data layer. Can someone post a simple example of how to feed my data to the memory layer? Say I am using an array X_train of dimensions N×M, where N is the number of samples and M is the number of features. I also have an N×1 array Y_train which holds the labels for the N samples. How can I feed my data to the memory layer? Does a tutorial exist? I can't figure it out from the documentation.
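For reference, roughly how I would wire that up (an untested sketch; it assumes Mocha's convention that the last tensor dimension indexes the samples, that classification labels are 0-based, and that MemoryDataLayer takes its arrays via data=Array[...]):

# X_train: N×M (samples × features), Y_train: N×1 (labels)
N, M = size(X_train)
data  = reshape(convert(Array{Float64}, X_train'), (M, 1, 1, N))  # features first, samples last
label = reshape(convert(Array{Float64}, Y_train), (1, 1, 1, N))

data_layer = MemoryDataLayer(name="train-data", tops=[:data, :label],
                             batch_size=64, data=Array[data, label])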

Thanks

CuDNN Arch mismatch

Hi,

I'm not sure if this is my fault, but I don't know where to go from here. I am getting the following error when trying to initialize the GPU backend.

julia> backend = GPUBackend()
GPUBackend
julia> init(backend)
16-Mar 23:11:13:INFO:root:Initializing CuDNN backend...
ERROR: Arch mismatch
 in create at /home/b/.julia/v0.3/Mocha/src/cuda/cudnn.jl:43
 in init at /home/b/.julia/v0.3/Mocha/src/cuda/backend.jl:139

I've searched around for "Arch mismatch" online and can't find anything. Any guidance would be great. Thanks!

convolutional autoencoders

I would find it useful to have an example of a convolutional auto-encoder. Would a tied max un-pooling layer be necessary, using switches from its twin pooling layer to keep track of argmaxes (as in Zeiler's data-viz paper)? And possibly a tied deconv layer? How would layer-wise greedy training be performed to initialize a deep CNN such as the one used for CIFAR-10?

Build problem

Hello,

I am trying to install Mocha and have hit a brick wall with both the mainline version and the dev version. Here is the error.

in hist_from_file at REPL.jl:330
in setup_interface at REPL.jl:719
in run_frontend at REPL.jl:837
in run_repl at REPL.jl:165
in _start at client.jl:451

INFO: Disabling history file for this session.
julia> Pkg.rm("Mocha")
INFO: No packages to install, update or remove
INFO: Package database updated

julia> Pkg.clone("https://github.com/pluskid/Mocha.jl.git")
INFO: Cloning Mocha from https://github.com/pluskid/Mocha.jl.git
ERROR: Mocha already exists
in error at error.jl:21

julia> Pkg.rm("Mocha")
INFO: Removing Mocha (unregistered)

julia> Pkg.clone("https://github.com/pluskid/Mocha.jl.git")
INFO: Cloning Mocha from https://github.com/pluskid/Mocha.jl.git
INFO: Computing changes...
INFO: No packages to install, update or remove
INFO: Package database updated

julia> Pkg.build("Mocha")
INFO: Building WinRPM
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1//repodata/repomd.xml
WARNING: convert(::Type{Ptr}, ::Int64) methods should be converted to be methods of unsafe_convert
in depwarn at deprecated.jl:62
in unsafe_convert at deprecated.jl:381
in download at C:\Users\sgt101\.julia\v0.4\WinRPM\src\WinRPM.jl:49
in cacheget at C:\Users\sgt101\.julia\v0.4\WinRPM\src\WinRPM.jl:148
in update at C:\Users\sgt101\.julia\v0.4\WinRPM\src\WinRPM.jl:159
in update at C:\Users\sgt101\.julia\v0.4\WinRPM\src\WinRPM.jl:126
in include at boot.jl:252
in include_from_node1 at loading.jl:133
in evalfile at loading.jl:175 (repeats 2 times)
in anonymous at pkg/entry.jl:628
in cd at file.jl:32
in build! at pkg/entry.jl:627
in build! at pkg/entry.jl:622 (repeats 2 times)
in build at pkg/entry.jl:639
in anonymous at pkg/dir.jl:31
in cd at file.jl:32
in cd at pkg/dir.jl:31
in build at pkg.jl:64
while loading C:\Users\sgt101\.julia\v0.4\WinRPM\deps\build.jl, in expression starting on line 2
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1//repodata/repomd.xml
INFO: Building Blosc
INFO: Building HDF5
INFO: Updating WinRPM package list
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1//repodata/repomd.xml
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1//repodata/repomd.xml
INFO: Building Mocha
Running g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp
==========================================[ ERROR: Mocha ]==========================================

LoadError: could not spawn `g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`: no such file or directory (ENOENT)
while loading C:\Users\sgt101\.julia\v0.4\Mocha\deps\build.jl, in expression starting on line 23

==========================================[ BUILD ERRORS ]==========================================

WARNING: Mocha had build errors.

  • packages with build errors remain installed in C:\Users\sgt101\.julia\v0.4
  • build the package(s) and all dependencies with Pkg.build("Mocha")
  • build a single package by running its deps/build.jl script

julia>

n_filter parameter to ConvolutionLayer

This is more of a documentation issue:

At: http://mochajl.readthedocs.org/en/latest/user-guide/layers/computation-layer.html

Under ConvolutionLayer I see:

n_filter
    Default 1. Number of filters.

Can you explain what 'Number of filters' means here? Perhaps include a picture in the doc? I'm not quite sure what the term 'filter' means here. There's a kernel parameter which takes a tuple to indicate the dimensions of the kernel. Then that kernel gets moved across the image (by the stride, I think). Does n_filter indicate the number of kernels that are being walked across the image?
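For what it's worth, my understanding (an outsider's answer, so treat it as hedged): yes, n_filter is the number of distinct kernels, i.e. the number of output channels. Each filter spans all input channels, is walked across the image by the stride, and produces one output channel. For example:

# a hypothetical layer: 20 distinct 5×5 kernels
conv = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5),
                        bottoms=[:data], tops=[:conv1])
# on a 28×28×1 input with stride 1 and no padding, the output blob is 24×24×20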

How make prediction?

Hi guys. I'm a newbie in Julia and Mocha. I used the examples and trained my model, but in my task I have multiple outputs: for example, the input is a 32×32 image and the output is a 12×12 image. Training is OK, but I don't know how to use my net. I load a snapshot of the trained model... but what do I need to do next? In Caffe I would use the forward (forward_all) method. How can I forward my image through the net?
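For anyone finding this later, a sketch of the usual recipe (untested; load_snapshot, net.states, and the blob copy! are assumptions about the API that may vary across Mocha versions): rebuild the net with a MemoryDataLayer holding your input, restore the snapshot, call forward, and copy the top blob out.

img  = reshape(convert(Array{Float64}, my_image), (32, 32, 1, 1))  # one 32×32 sample
data = MemoryDataLayer(tops=[:data], batch_size=1, data=Array[img])
net  = Net("deploy", backend, [data, common_layers...])  # same layers and names as training
load_snapshot(net, "snapshots/snapshot-010000.jld")      # assumption: restores the weights
forward(net)
out = zeros(12, 12, 1, 1)
copy!(out, net.states[end].blobs[1])  # top blob of the last layer, the 12×12 output here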

Thanks.

Stacked Denoising Auto-encoders

(Deep) auto-encoders and variants are important ways of doing unsupervised deep learning. Since they share some properties with the current Mocha architecture, it would be good to add some auto-encoders to Mocha.

  • Denoising Auto-encoders (#28)
  • Stacked auto-encoders, an interface to do layer-wise training (#31, #32)
  • Tutorial documentation (#35)

Constantly getting BoundsError()

Hi, I can't find the reason for this error while running the solver:

13-Aug 18:59:39:DEBUG:root:Checking network topology for back-propagation
13-Aug 18:59:39:DEBUG:root:Init network SV-train
13-Aug 18:59:39:DEBUG:root:Init parameter weight for layer fc1
13-Aug 18:59:39:DEBUG:root:Init parameter bias for layer fc1
13-Aug 18:59:39:DEBUG:root:Init parameter weight for layer fc2
13-Aug 18:59:39:DEBUG:root:Init parameter bias for layer fc2
13-Aug 18:59:39:DEBUG:root:Init parameter weight for layer out
13-Aug 18:59:39:DEBUG:root:Init parameter bias for layer out
13-Aug 18:59:39:DEBUG:root:Initializing coffee breaks
13-Aug 18:59:39:INFO:root:Snapshot directory snapshots_dropout_fc already exists
13-Aug 18:59:39:DEBUG:root:Init network SV-test
13-Aug 18:59:39:INFO:root:ITER = 000000:: TRAIN obj-val = 4.12713432
13-Aug 18:59:39:INFO:root:Saving snapshot to snapshot-000000.jld...
13-Aug 18:59:39:DEBUG:root:Saving parameters for layer fc1
13-Aug 18:59:39:DEBUG:root:Saving parameters for layer fc2
13-Aug 18:59:39:DEBUG:root:Saving parameters for layer out
13-Aug 18:59:44:INFO:root:
13-Aug 18:59:44:INFO:root:## Performance on Validation Set after 0 iterations
13-Aug 18:59:44:INFO:root:---------------------------------------------------------
13-Aug 18:59:44:INFO:root:  Accuracy (avg over 1570) = 0.0000%
13-Aug 18:59:44:INFO:root:---------------------------------------------------------
13-Aug 18:59:44:INFO:root:
13-Aug 18:59:44:DEBUG:root:Entering solver loop
BoundsError()
while loading In[18], in expression starting on line 1

 in checkbounds at abstractarray.jl:62
 in checkbounds at abstractarray.jl:70
 in broadcast_getindex! at broadcast.jl:244
 in broadcast_getindex at broadcast.jl:241
 in forward at /Users/quetzal/.julia/v0.3/Mocha/src/layers/multinomial-logistic-loss.jl:102
 in forward at /Users/quetzal/.julia/v0.3/Mocha/src/layers/softmax-loss.jl:46
 in forward at /Users/quetzal/.julia/v0.3/Mocha/src/net.jl:148
 in solve at /Users/quetzal/.julia/v0.3/Mocha/src/solvers.jl:355

The network arch is very similar to the mnist_dropout_fc.jl example, except I'm using a different dataset.
You can reproduce this issue from this notebook: https://github.com/nikolaypavlov/julia-dl/blob/master/Julia-DL.ipynb

every_n_epoch has weird behavior

When I looked at the coffee-break code I realized that the every_n_epoch parameter has weird behavior: setting it results in the break being called in every iteration of the respective epoch.
I would guess what you actually wanted (and what would make sense to me) was to call the break only once at the beginning of the epoch (so that every_n_epoch = 1 is equivalent to every_n_iter = 600 for MNIST).
Unfortunately this requires getting more information from the data layer, as we have to track where in the epoch we are.
What do you think?

Shuffled sequential access in HDF5DataLayer

Currently HDF5DataLayer has a limitation: it is not able to shuffle data if the HDF5 file is chunked or compressed. In Pylearn2, there is a special mode for accessing data called "shuffled sequential", which basically iterates sequentially over a shuffled version of the dataset. Even though shuffling a whole chunked/compressed dataset is infeasible, I think we could access the chunks in a random order and also shuffle the contents inside each chunk, given that chunks are larger than the batch size (or, ideally, a multiple of the batch size). Maybe this is too specific to add to the library, so I wanted to ask if such a thing would be interesting in general before I start working on it.
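To make the proposal concrete, a back-of-the-envelope sketch of the access pattern (plain Julia; read_chunk and emit_sample are hypothetical helpers standing in for the actual HDF5 reads and the batch-filling code):

# visit chunks in a random order, and shuffle samples inside each chunk;
# assumes the chunk size is a multiple of the batch size
for c in randperm(n_chunks)                      # random chunk order
  chunk = read_chunk(h5dset, c)                  # hypothetical: one contiguous HDF5 read
  for i in randperm(size(chunk, ndims(chunk)))   # shuffle within the chunk
    emit_sample(chunk, i)                        # hypothetical sink that fills batches
  end
end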

Ref-counting for shared-parameters

Currently shared parameters have owners, and the owner is responsible for destroying the blob. This could be problematic when the user wants to continue using the shared network after destroying the owner network. This might not be very urgent, since in many cases a Mocha program typically has the following structure:

  1. start
  2. optionally load saved network
  3. do some computation
  4. optionally save the network
  5. release all resources

But if there is a good way to do ref-counting in Julia, it could make managing shared parameters (and many other resources) much cleaner in Mocha.
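For what it's worth, a simple manual ref-count is easy to express in Julia via retain/release pairs; a minimal sketch (not Mocha code; Julia 0.3 type syntax):

type SharedBlob              # `type` is Julia 0.3 syntax for a mutable struct
  data::Any                  # the underlying storage (e.g. a GPU pointer)
  refcount::Int
end

retain!(b::SharedBlob) = (b.refcount += 1; b)

function release!(b::SharedBlob)
  b.refcount -= 1
  if b.refcount == 0
    # free GPU memory / close handles here
    b.data = nothing
  end
end

Each net sharing the parameters would retain! them at setup and release! them in its destroy, so the blob outlives whichever net is destroyed first.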

Running test failed

Unable to run Mocha. Running Julia on Mac OS X 10.10.2. When I tried the default test, it failed with the following message:

julia> Pkg.test("Mocha")
INFO: Testing Mocha
20-Jul 16:17:59:INFO:root:Configuring Mocha...
20-Jul 16:17:59:INFO:root: * CUDA disabled by default
20-Jul 16:17:59:INFO:root: * Native Ext disabled by default
20-Jul 16:17:59:INFO:root:Mocha configured, continue loading module...
ERROR: JLD not found
in require at loading.jl:47
in include at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include_from_node1 at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include_from_node1 at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in reload_path at loading.jl:152
in _require at loading.jl:67
in require at loading.jl:51
in include at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include_from_node1 at loading.jl:128
in process_options at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in _start at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
while loading /Users/usrname/.julia/v0.3/Mocha/src/utils/io.jl, in expression starting on line 43
while loading /Users/usrname/.julia/v0.3/Mocha/src/Mocha.jl, in expression starting on line 17
while loading /Users/usrname/.julia/v0.3/Mocha/test/runtests.jl, in expression starting on line 8

========================================================[ ERROR: Mocha ]========================================================

failed process: Process(/Applications/Julia-0.3.10.app/Contents/Resources/julia/bin/julia /Users/usrname/.julia/v0.3/Mocha/test/runtests.jl, ProcessExited(1)) [1]

INFO: No packages to install, update or remove
ERROR: Mocha had test errors
in error at error.jl:21
in test at pkg/entry.jl:718
in anonymous at pkg/dir.jl:28
in cd at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in cd at pkg/dir.jl:28
in test at pkg.jl:67

I also tried just this:
using Mocha

That gave this error:

20-Jul 16:23:10:INFO:root:Configuring Mocha...
20-Jul 16:23:10:INFO:root: * CUDA disabled by default
20-Jul 16:23:10:INFO:root: * Native Ext disabled by default
20-Jul 16:23:10:INFO:root:Mocha configured, continue loading module...
ERROR: JLD not found
in require at loading.jl:47
in include at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include_from_node1 at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in include_from_node1 at /Applications/Julia-0.3.10.app/Contents/Resources/julia/lib/julia/sys.dylib
in reload_path at loading.jl:152
in _require at loading.jl:67
in require at loading.jl:51
while loading /Users/usrname/.julia/v0.3/Mocha/src/utils/io.jl, in expression starting on line 43
while loading /Users/usrname/.julia/v0.3/Mocha/src/Mocha.jl, in expression starting on line 17

Dimensionality problems in SquareLossLayer?

We've had some suspicions that the SquareLossLayer is perhaps not giving us correct answers, so I went into the SquareLossLayer code and replaced the BLAS operations with:
state.loss = (0.5/get_num(pred))*(vecnorm(label.data - state.pred_copy.data))^2
just to see if that would give different answers.

The result was a dimensionality mismatch error at that line.

I figured out how to use Debug, set a breakpoint at the prior line, and checked the dimensions:

debug:66> size(label.data)
(750,750,10) (our image data size is 750x750 and the 10 is the batch size)
debug:66> size(state.pred_copy.data)
(750,750,1,10) (750x750 image, 1 example, batchsize 10)

The BLAS functions AFAICT don't do any kind of dimension check, so I'm wondering if we're actually getting an incorrect answer, as the underlying BLAS C code just moves the pointer through memory?
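For anyone hitting the same thing: the mismatch is just a missing singleton dimension, and an explicit reshape makes the vecnorm version above work while doubling as the dimension check the raw BLAS path never performs (a sketch of the workaround, untested):

# label.data is (750,750,10); state.pred_copy.data is (750,750,1,10)
lbl = reshape(label.data, size(state.pred_copy.data))  # throws if the lengths differ
state.loss = (0.5/get_num(pred)) * vecnorm(lbl - state.pred_copy.data)^2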

Errors with OpenMP on OS X

I get the following errors building Mocha.jl:

julia> Pkg.build("Mocha")
INFO: Building Blosc
INFO: Building Homebrew
HEAD is now at 5d05a25 test-bot: unlink conflict formulae during the test
HEAD is now at 9c64dd6 Merge branch 'staging'
INFO: Building HDF5
INFO: Building Mocha
Running `g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`
ld: library not found for -lgomp
clang: error: linker command failed with exit code 1 (use -v to see invocation)
==================================[ ERROR: Mocha ]===================================

failed process: Process(`g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`, ProcessExited(1)) [1]
while loading /Users/tf/.julia/v0.3/Mocha/deps/build.jl, in expression starting on line 23

=====================================================================================

==================================[ BUILD ERRORS ]===================================

WARNING: Mocha had build errors.

 - packages with build errors remain installed in /Users/tf/.julia/v0.3
 - build the package(s) and all dependencies with `Pkg.build("Mocha")`
 - build a single package by running its `deps/build.jl` script

=====================================================================================

I set MOCHA_FORCE_OMP=/usr/local/Cellar/gcc/4.9.2_1, since I used Homebrew to install a gcc with OpenMP.

Statistics for SquareLossLayer

Is there a way to test a net during training, using the ValidationPerformance coffee break, when doing regression with a final SquareLossLayer? It seems to me that only the AccuracyLayer generates statistics, but I think this would also be very useful for the SquareLossLayer. Am I missing something, or is this just not yet implemented?

Thanks a lot! Mocha is a really great tool!

Making solvers more general

I'm working on an implementation of RMSProp for Mocha, but I quickly ran into the issue that it needs more parameters than SGD (to control things like the exponential window and the adaptive step sizes), and there is no good way to define them, since the acceptable parameters in SolverParameters are hard-coded. I see a couple of ways to deal with this.

  1. Create an abstract type SolverParameters, and have SGDParameters and RMSPropParameters be subtypes (see the sketch after the discussion below). This seems like it would break the fewest things.

  2. Get rid of SolverParameters altogether and define solvers as below:

solver = SGD(max_iter=600*2000, regu_coef=0.0,
mom_policy=MomPolicy.Linear(0.5, 0.0008, 600, 0.9),
lr_policy=LRPolicy.Step(0.1, 0.998, 600),
load_from=base_dir)

This would break more things, but may simplify things overall. If you can think of a better way to deal with this, I'd love to hear it.
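To make option 1 concrete, a sketch in Julia 0.3 syntax (the RMSProp field names are made up for illustration):

abstract SolverParameters                # `abstract type ... end` in later Julia

type SGDParameters <: SolverParameters
  max_iter::Int
  regu_coef::Float64
  mom_policy                             # policies as today
  lr_policy
  load_from
end

type RMSPropParameters <: SolverParameters
  max_iter::Int
  decay::Float64                         # exponential window for the squared-gradient average
  epsilon::Float64                       # damping term for the adaptive step size
  lr_policy
  load_from
end

# the update code can then dispatch on the concrete parameter type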

A related issue is the updates in the general solver loop:
solver_state.learning_rate = get_learning_rate(solver.params.lr_policy, solver_state)
solver_state.momentum = get_momentum(solver.params.mom_policy, solver_state)

Should these be moved into the individual solvers, e.g. SGD, to keep the solver loop maximally general? This would cleanly allow new solvers to have additional parameters that vary across iterations.

I can work on both these things, but wanted to discuss first.

Reporting current LR and Momentum

When the docs for "TrainingSummary" say:

 Reporting for other solver states like the current learning rate and momentum could be easily added.

Does this mean I need to edit the source to add this capability? Or is there a different way?

Thanks!

Tests fail on Julia 0.3.3

Pkg.test("Mocha") succeeds on 0.3.2, but fails on 0.3.3 with the following error. Tested both with Mocha 0.0.2 as well as current master. Tested on OSX 10.9.

julia> VERSION
v"0.3.3"

julia> Pkg.test("Mocha")
INFO: Testing Mocha
26-Nov 15:25:54:INFO:root:Configuring Mocha...
26-Nov 15:25:54:INFO:root: * CUDA       disabled by default
26-Nov 15:25:54:INFO:root: * Native Ext disabled by default
26-Nov 15:25:54:INFO:root:Mocha configured, continue loading module...
-- Testing RawBLAS{Float32} Utilities
ERROR: ccall: could not find function sgemm_ in library libopenblas
 in gemm! at /Users/rene/.julia/v0.3/Mocha/src/utils/blas.jl:10
 in test_raw_blas at /Users/rene/.julia/v0.3/Mocha/test/utils/blas.jl:11
 in test_raw_blas at /Users/rene/.julia/v0.3/Mocha/test/utils/blas.jl:33
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354
 in _start at /Users/rene/local/julia/usr/lib/julia/sys.dylib
while loading /Users/rene/.julia/v0.3/Mocha/test/utils/blas.jl, in expression starting on line 37
while loading /Users/rene/.julia/v0.3/Mocha/test/runtests.jl, in expression starting on line 27

==================================================[ ERROR: Mocha ]===================================================

failed process: Process(`/Users/rene/local/julia/usr/bin/julia /Users/rene/.julia/v0.3/Mocha/test/runtests.jl`, ProcessExited(1)) [1]

=====================================================================================================================
INFO: No packages to install, update or remove
ERROR: Mocha had test errors
 in error at error.jl:21
 in test at pkg/entry.jl:718
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in test at pkg.jl:67

Thanks!

Problem compiling logistic_loss CUDA implementation

When I try to compile the CUDA kernels, I get this error message:

nvcc -ptx kernels.cu
logistic_loss.impl(31): error: identifier "atomicAdd" is undefined

1 error detected in the compilation of "/tmp/tmpxft_0000722c_00000000-6_kernels.cpp1.ii".
make: *** [kernels.ptx] Error 2

I am using CUDA 4.2 as that's what is available on the cluster I am accessing. Maybe that's too old?
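(For the record: single-precision atomicAdd requires compute capability 2.0 or higher, and nvcc of that era targeted sm_10 by default, so compiling with something like nvcc -arch=sm_20 -ptx kernels.cu may be enough, assuming the card itself is Fermi-class or newer. If the GPU is older than that, CUDA 4.2 with that hardware genuinely cannot build this kernel.)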

Using caffe without cuDNN

Hi there. Is it possible to use Caffe as a backend without cuDNN? (Since cuDNN requires an NVIDIA compute capability of 3.0, my GTX 580 can't use it.)

I tried commenting out backend.cudnn_ctx = CuDNN.create() in backend.jl, but it, uhh, didn't work.

Thanks in advance.

$ julia mnist.jl 
12-Feb 19:33:24:INFO:root:Configuring Mocha...
12-Feb 19:33:24:INFO:root: * CUDA       enabled (MOCHA_USE_CUDA environment variable detected)
12-Feb 19:33:24:INFO:root: * Native Ext disabled by default
12-Feb 19:33:24:INFO:root:Mocha configured, continue loading module...
12-Feb 19:33:27:INFO:root:Initializing CuDNN backend...
12-Feb 19:33:27:INFO:root:CUDA backend initialized without cuDNN.
12-Feb 19:33:28:INFO:root:Constructing net MNIST-train on GPUBackend...
12-Feb 19:33:28:INFO:root:Topological sorting 8 layers...
12-Feb 19:33:28:INFO:root:Setup layers...
12-Feb 19:33:29:INFO:root:Network constructed!
12-Feb 19:33:29:INFO:root:Constructing net MNIST-test on GPUBackend...
12-Feb 19:33:29:INFO:root:Topological sorting 8 layers...
12-Feb 19:33:29:INFO:root:Setup layers...
12-Feb 19:33:29:DEBUG:root:ConvolutionLayer(conv1): sharing filters and bias
12-Feb 19:33:29:DEBUG:root:ConvolutionLayer(conv2): sharing filters and bias
12-Feb 19:33:29:DEBUG:root:InnerProductLayer(ip1): sharing weights and bias
12-Feb 19:33:29:DEBUG:root:InnerProductLayer(ip2): sharing weights and bias
12-Feb 19:33:29:INFO:root:Network constructed!
12-Feb 19:33:30:DEBUG:root:Checking network topology for back-propagation
12-Feb 19:33:30:DEBUG:root:Init network MNIST-train
12-Feb 19:33:30:DEBUG:root:Init parameter filter for layer conv1
12-Feb 19:33:30:DEBUG:root:Init parameter bias for layer conv1
12-Feb 19:33:30:DEBUG:root:Init parameter filter for layer conv2
12-Feb 19:33:30:DEBUG:root:Init parameter bias for layer conv2
12-Feb 19:33:30:DEBUG:root:Init parameter weight for layer ip1
12-Feb 19:33:30:DEBUG:root:Init parameter bias for layer ip1
12-Feb 19:33:30:DEBUG:root:Init parameter weight for layer ip2
12-Feb 19:33:30:DEBUG:root:Init parameter bias for layer ip2

signal (11): Segmentation fault
unknown function (ip: -1140687215)
cudnnConvolutionForward at /usr/local/lib/libcudnn.so.6.5 (unknown line)
convolution_forward at /home/notnikolaos/.julia/v0.3/Mocha/src/cuda/cudnn.jl:41
jlcall_convolution_forward_20757 at  (unknown line)
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
forward at /home/notnikolaos/.julia/v0.3/Mocha/src/cuda/layers/convolution.jl:83
jlcall_forward_20752 at  (unknown line)
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
forward at /home/notnikolaos/.julia/v0.3/Mocha/src/net.jl:137
jlcall_forward_20729 at  (unknown line)
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
solve at /home/notnikolaos/.julia/v0.3/Mocha/src/solvers.jl:222
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
unknown function (ip: -917840712)
unknown function (ip: -917844544)
unknown function (ip: -917777094)
unknown function (ip: -917774387)
jl_load at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
include at ./boot.jl:245
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
include_from_node1 at loading.jl:128
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
process_options at ./client.jl:285
_start at ./client.jl:354
jlcall__start_17289 at /usr/bin/../lib/x86_64-linux-gnu/julia/sys.so (unknown line)
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
unknown function (ip: 4200623)
julia_trampoline at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown line)
unknown function (ip: 4199613)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 4199667)
unknown function (ip: 0)
Segmentation fault (core dumped)

ERROR: HDF5OutputLayer: output file 'C:\Users\....tmp' already exists

Below is console output describing the error. I presume this is the wrong place to ask for help, but I would welcome any suggestions as to where to start or whom to ask.

21-Mar 16:11:02:INFO:root:Configuring Mocha...
21-Mar 16:11:02:INFO:root: * CUDA disabled by default
21-Mar 16:11:02:INFO:root: * Native Ext disabled by default
21-Mar 16:11:02:INFO:root:Mocha configured, continue loading module...
-- Testing network topology with duplicated blobs
21-Mar 16:11:08:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:08:INFO:root:Topological sorting 1 layers...
21-Mar 16:11:08:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:08:INFO:root:Topological sorting 2 layers...
-- Testing network topology with missing blobs
21-Mar 16:11:08:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:08:INFO:root:Topological sorting 1 layers...
-- Testing network topology with circular dependency
21-Mar 16:11:08:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:08:INFO:root:Topological sorting 2 layers...
-- Testing network topology with multiple back-propagate path
> Good blob sharing
21-Mar 16:11:08:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:08:INFO:root:Topological sorting 5 layers...
21-Mar 16:11:08:INFO:root:Setup layers...
21-Mar 16:11:09:INFO:root:Network constructed!
21-Mar 16:11:09:DEBUG:root:Destroying network net
> Bad blob sharing
21-Mar 16:11:09:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:09:INFO:root:Topological sorting 6 layers...
21-Mar 16:11:09:INFO:root:Setup layers...
21-Mar 16:11:09:INFO:root:Network constructed!
-- Testing network topology with dangling blob
> Good case
21-Mar 16:11:09:INFO:root:Constructing net net on CPUBackend...
21-Mar 16:11:09:INFO:root:Topological sorting 4 layers...
21-Mar 16:11:09:INFO:root:Setup layers...
ERROR: HDF5OutputLayer: output file 'C:\Users\wolfe_000\AppData\Local\Temp\jul4BB6.tmp' already exists
in error at error.jl:21 (repeats 2 times)
while loading C:\Users\wolfe_000\.julia\v0.3\Mocha\test\net/topology.jl, in expression starting on line 112
while loading C:\Users\wolfe_000\.julia\v0.3\Mocha\test\runtests.jl, in expression starting on line 33

================================[ ERROR: Mocha ]================================

failed process: Process('c:\Program Files (x86)\Julia\resources\app\julia\bin/julia' 'C:\Users\wolfe_000\.julia\v0.3\Mocha\test\runtests.jl', ProcessExited(1)) [1]

Building from source on a Mac

Building on Linux works perfectly, but building on Mac fails: Julia never finishes loading Mocha.
Source code:

use_gpu = false
if use_gpu
  ENV["MOCHA_USE_CUDA"] = "true"
else
  ENV["MOCHA_USE_NATIVE_EXT"] = "true"
  ENV["OMP_NUM_THREADS"] = 1
  blas_set_num_threads(1)
end
using Mocha
using HDF5
When killed after 1h:
^CERROR: interrupt
while loading /Users/sjv/Documents/Mocha2/src/Logging.jl, in expression starting on line 1
while loading /Users/sjv/Documents/Mocha2/src/logging.jl, in expression starting on line 1
while loading /Users/sjv/Documents/Mocha2/src/Mocha.jl, in expression starting on line 4

Does anybody have any ideas?

Mocha.jl vs a Caffe binding approach

Hi,
I found Mocha.jl while searching for a Julia binding for Caffe. Your work is really interesting, and I wonder whether Mocha is written from scratch or uses the Caffe core components.
I'm thinking of creating a Julia binding for Caffe, essentially using FFI with a Julia interface, and I want to know how it would compare to Mocha. Is it worth doing, or will Mocha do the job?

Cheers,

Jao

Roadmap

Discussions and/or suggestions are welcome!

  • Interface
    • Network architecture visualization
    • Recurrent Neural Networks
  • Infrastructure
    • CUDA Stream
    • Multi-GPU support
    • 4D tensor -> ND tensor (#21, #25)
    • Unsupervised Learning ((deep) autoencoders and variants) (#29)
  • Document
    • Developer's Guide

Mocha Install Error: ... could not spawn `g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`:

Pkg.test("Mocha") is on the end...
_
_ _ ()_ | A fresh approach to technical computing
() | () () | Documentation: http://docs.julialang.org
_ _ | | __ _ | Type "help()" for help.
| | | | | | |/ ` | |
| | |
| | | | (
| | | Version 0.3.3 (2014-11-23 20:19 UTC)
/ |_'|||__'| | Official http://julialang.org/ release
|__/ | x86_64-w64-mingw32

julia> Pkg.add("Mocha")
WARNING: julia is fixed at 0.3.3 conflicting with requirement for LinearAlgebra: [0.4.0-,?)
INFO: Cloning cache of Logging from git://github.com/kmsquire/Logging.jl.git
INFO: Cloning cache of Mocha from git://github.com/pluskid/Mocha.jl.git
INFO: Installing Logging v0.0.5
INFO: Installing Mocha v0.0.6
INFO: Building Blosc
INFO: Building LibCURL
INFO: Building WinRPM
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1//repodata/repomd.xml
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1//repodata/repomd.xml
INFO: Building HDF5
INFO: Building Mocha
Running g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp
=================================================[ ERROR: Mocha ]=================================================

could not spawn `g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`: no such file or directory (ENOENT)
while loading C:\Users\SAMSUNG2\.julia\v0.3\Mocha\deps\build.jl, in expression starting on line 23

=================================================[ BUILD ERRORS ]=================================================

WARNING: Mocha had build errors.

  • packages with build errors remain installed in C:\Users\SAMSUNG2\.julia\v0.3
  • build a package and all its dependencies with Pkg.build(pkg)
  • build a single package by running its deps/build.jl script

INFO: Package database updated

julia> using Mocha
24-sty 16:28:58:INFO:root:Configuring Mocha...
24-sty 16:28:58:INFO:root: * CUDA disabled by default
24-sty 16:28:58:INFO:root: * Native Ext disabled by default
24-sty 16:28:58:INFO:root:Mocha configured, continue loading module...
Warning: using Mocha.Layer in module Main conflicts with an existing identifier.
Warning: using Mocha.dec in module Main conflicts with an existing identifier.

julia> Pkg.update()
INFO: Updating METADATA...
INFO: Updating cache of MinimalPerfectHashes...
INFO: Updating cache of MinimalPerfectHashes...
INFO: Updating LinearAlgebra...
INFO: Updating FLANN...
INFO: Updating Distributions...
INFO: Computing changes...
WARNING: julia is fixed at 0.3.3 conflicting with requirement for LinearAlgebra: [0.4.0-,?)
INFO: No packages to install, update or remove

julia> Pkg.build("Mocha")
INFO: Building Blosc
INFO: Building LibCURL
INFO: Building WinRPM
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1//repodata/repomd.xml
INFO: Downloading http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1//repodata/repomd.xml
INFO: Building HDF5
INFO: Building Mocha
Running g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp
=================================================[ ERROR: Mocha ]=================================================

could not spawn `g++ -fPIC -Wall -O3 -shared -fopenmp -o libmochaext.so im2col.cpp pooling.cpp`: no such file or directory (ENOENT)
while loading C:\Users\SAMSUNG2\.julia\v0.3\Mocha\deps\build.jl, in expression starting on line 23

=================================================[ BUILD ERRORS ]=================================================

julia> Pkg.test("Mocha")
INFO: Testing Mocha
24-sty 16:36:31:INFO:root:Configuring Mocha...
24-sty 16:36:31:INFO:root: * CUDA disabled by default
24-sty 16:36:31:INFO:root: * Native Ext disabled by default
24-sty 16:36:31:INFO:root:Mocha configured, continue loading module...
-- Testing network topology with duplicated blobs
24-sty 16:36:38:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:38:INFO:root:Topological sorting 1 layers...
24-sty 16:36:38:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:38:INFO:root:Topological sorting 2 layers...
-- Testing network topology with missing blobs
24-sty 16:36:38:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:38:INFO:root:Topological sorting 1 layers...
-- Testing network topology with circular dependency
24-sty 16:36:38:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:38:INFO:root:Topological sorting 2 layers...
-- Testing network topology with multiple back-propagate path
> Good blob sharing
24-sty 16:36:38:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:38:INFO:root:Topological sorting 5 layers...
24-sty 16:36:38:INFO:root:Setup layers...
24-sty 16:36:39:INFO:root:Network constructed!
24-sty 16:36:40:DEBUG:root:Destroying network net
> Bad blob sharing
24-sty 16:36:40:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:40:INFO:root:Topological sorting 6 layers...
24-sty 16:36:40:INFO:root:Setup layers...
24-sty 16:36:40:INFO:root:Network constructed!
-- Testing network topology with dangling blob
> Good case
24-sty 16:36:40:INFO:root:Constructing net net on CPUBackend...
24-sty 16:36:40:INFO:root:Topological sorting 4 layers...
24-sty 16:36:40:INFO:root:Setup layers...
ERROR: HDF5OutputLayer: output file 'C:\Users\SAMSUNG2\AppData\Local\Temp\julF77E.tmp' already exists
in error at error.jl:21
while loading C:\Users\SAMSUNG2\.julia\v0.3\Mocha\test\net/topology.jl, in expression starting on line 112
while loading C:\Users\SAMSUNG2\.julia\v0.3\Mocha\test\runtests.jl, in expression starting on line 33

=================================================[ ERROR: Mocha ]=================================================

failed process: Process('C:\Users\SAMSUNG2\AppData\Local\Julia-0.3.3\bin/julia' 'C:\Users\SAMSUNG2\.julia\v0.3\Mocha\test\runtests.jl', ProcessExited(1)) [1]

WARNING: Mocha had build errors.

  • packages with build errors remain installed in C:\Users\SAMSUNG2\.julia\v0.3
  • build a package and all its dependencies with Pkg.build(pkg)
  • build a single package by running its deps/build.jl script

julia> using Mocha

julia> data = HDF5DataLayer(name="train-data",source="train-data-list.txt",batch_size=64)
HDF5DataLayer(train-data)

julia>

julia> pool = PoolingLayer(name="pool1",kernel=(2,2),stride=(2,2),bottoms=[:conv],tops=[:pool])
PoolingLayer(pool1)

julia> conv2 = ConvolutionLayer(name="conv2",n_filter=50,kernel=(5,5),bottoms=[:pool],tops=[:conv2])
Warning: imported binding for conv2 overwritten in module Main
ConvolutionLayer(conv2)

julia> pool2 = PoolingLayer(name="pool2",kernel=(2,2),stride=(2,2),bottoms=[:conv2],tops=[:pool2])
PoolingLayer(pool2)

julia> fc1 = InnerProductLayer(name="ip1",output_dim=500,neuron=Neurons.ReLU(),bottoms=[:pool2],
tops=[:ip1])
InnerProductLayer(ip1)

julia> fc2 = InnerProductLayer(name="ip2",output_dim=10,bottoms=[:ip1],tops=[:ip2])
InnerProductLayer(ip2)

julia> loss = SoftmaxLossLayer(name="loss",bottoms=[:ip2,:label])
SoftmaxLossLayer(loss)

julia>

julia> backend = GPUBackend()
ERROR: GPUBackend not defined

julia> init(backend)
ERROR: backend not defined

Paul

Cannot compile dump_network_hdf5.cpp

Attempting to compile dump_network_hdf5.cpp with Caffe (current version) results in a litany of failures. Compiling Caffe without it works fine. Any advice appreciated.

tools/dump_network_hdf5.cpp:25:17: error: no type named 'set_phase' in 'caffe::Caffe'
  caffe::Caffe::set_phase(caffe::Caffe::TEST);
  ~~~~~~~~~~~~~~^
tools/dump_network_hdf5.cpp:25:41: error: definition or redeclaration of 'TEST' not allowed inside a function
  caffe::Caffe::set_phase(caffe::Caffe::TEST);
                          ~~~~~~~~~~~~~~^
tools/dump_network_hdf5.cpp:39:3: error: reference to 'shared_ptr' is ambiguous
  shared_ptr<Net<float> > caffe_net;
  ^
/Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.10.xctoolchain/usr/bin/../include/c++/v1/memory:3750:29: note: candidate found by name lookup is 'std::__1::shared_ptr'
class _LIBCPP_TYPE_VIS_ONLY shared_ptr
                            ^
./include/caffe/common.hpp:75:14: note: candidate found by name lookup is 'caffe::shared_ptr'
using boost::shared_ptr;
             ^
tools/dump_network_hdf5.cpp:40:3: error: use of undeclared identifier 'caffe_net'; did you mean 'caffe_set'?
  caffe_net.reset(new Net<float>(network_params));
  ^~~~~~~~~
  caffe_set
./include/caffe/util/math_functions.hpp:40:6: note: 'caffe_set' declared here
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:40:3: error: reference to overloaded function could not be resolved; did you mean to call it?
  caffe_net.reset(new Net<float>(network_params));
  ^~~~~~~~~
./include/caffe/util/math_functions.hpp:40:6: note: possible target for call
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:41:3: error: use of undeclared identifier 'caffe_net'; did you mean 'caffe_set'?
  caffe_net->CopyTrainedLayersFrom(network_snapshot);
  ^~~~~~~~~
  caffe_set
./include/caffe/util/math_functions.hpp:40:6: note: 'caffe_set' declared here
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:41:3: error: reference to overloaded function could not be resolved; did you mean to call it?
  caffe_net->CopyTrainedLayersFrom(network_snapshot);
  ^~~~~~~~~
./include/caffe/util/math_functions.hpp:40:6: note: possible target for call
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:43:16: error: reference to 'shared_ptr' is ambiguous
  const vector<shared_ptr<Layer<float> > >& layers = caffe_net->layers();
               ^
/Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.10.xctoolchain/usr/bin/../include/c++/v1/memory:3750:29: note: candidate found by name lookup is 'std::__1::shared_ptr'
class _LIBCPP_TYPE_VIS_ONLY shared_ptr
                            ^
./include/caffe/common.hpp:75:14: note: candidate found by name lookup is 'caffe::shared_ptr'
using boost::shared_ptr;
             ^
tools/dump_network_hdf5.cpp:43:40: error: expected '(' for function-style cast or type construction
  const vector<shared_ptr<Layer<float> > >& layers = caffe_net->layers();
                          ~~~~~~~~~~~~ ^
tools/dump_network_hdf5.cpp:43:42: error: expected unqualified-id
  const vector<shared_ptr<Layer<float> > >& layers = caffe_net->layers();
                                         ^
tools/dump_network_hdf5.cpp:44:40: error: use of undeclared identifier 'caffe_net'; did you mean 'caffe_set'?
  const vector<string> & layer_names = caffe_net->layer_names();
                                       ^~~~~~~~~
                                       caffe_set
./include/caffe/util/math_functions.hpp:40:6: note: 'caffe_set' declared here
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:44:40: error: reference to overloaded function could not be resolved; did you mean to call it?
  const vector<string> & layer_names = caffe_net->layer_names();
                                       ^~~~~~~~~
./include/caffe/util/math_functions.hpp:40:6: note: possible target for call
void caffe_set(const int N, const Dtype alpha, Dtype *X);
     ^
tools/dump_network_hdf5.cpp:46:84: error: use of undeclared identifier 'layers'; did you mean 'layer'?
    if (InnerProductLayer<float> *layer = dynamic_cast<InnerProductLayer<float> *>(layers[i].get())) {
                                                                                   ^~~~~~
                                                                                   layer
tools/dump_network_hdf5.cpp:46:35: note: 'layer' declared here
    if (InnerProductLayer<float> *layer = dynamic_cast<InnerProductLayer<float> *>(layers[i].get())) {
                                  ^
tools/dump_network_hdf5.cpp:46:94: error: no member named 'get' in 'caffe::InnerProductLayer<float>'
    if (InnerProductLayer<float> *layer = dynamic_cast<InnerProductLayer<float> *>(layers[i].get())) {
                                                                                   ~~~~~~~~~ ^
tools/dump_network_hdf5.cpp:49:89: error: use of undeclared identifier 'layers'; did you mean 'layer'?
    } else if (ConvolutionLayer<float> *layer = dynamic_cast<ConvolutionLayer<float> *>(layers[i].get())) {
                                                                                        ^~~~~~
                                                                                        layer
tools/dump_network_hdf5.cpp:46:35: note: 'layer' declared here
    if (InnerProductLayer<float> *layer = dynamic_cast<InnerProductLayer<float> *>(layers[i].get())) {
                                  ^
tools/dump_network_hdf5.cpp:49:99: error: no member named 'get' in 'caffe::InnerProductLayer<float>'
    } else if (ConvolutionLayer<float> *layer = dynamic_cast<ConvolutionLayer<float> *>(layers[i].get())) {
                                                                                        ~~~~~~~~~ ^
tools/dump_network_hdf5.cpp:62:10: error: reference to 'shared_ptr' is ambiguous
  vector<shared_ptr<Blob<float> > >& blobs = layer->blobs();
         ^
/Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.10.xctoolchain/usr/bin/../include/c++/v1/memory:3750:29: note: candidate found by name lookup is 'std::__1::shared_ptr'
class _LIBCPP_TYPE_VIS_ONLY shared_ptr
                            ^
./include/caffe/common.hpp:75:14: note: candidate found by name lookup is 'caffe::shared_ptr'
using boost::shared_ptr;
             ^
tools/dump_network_hdf5.cpp:62:33: error: expected '(' for function-style cast or type construction
  vector<shared_ptr<Blob<float> > >& blobs = layer->blobs();
                    ~~~~~~~~~~~ ^
tools/dump_network_hdf5.cpp:63:7: error: use of undeclared identifier 'blobs'
  if (blobs.size() == 0) {
      ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
make: *** [.build_release/tools/dump_network_hdf5.o] Error 1
make: *** Waiting for unfinished jobs....

Add Roadmap

A roadmap for future plans might be good to avoid duplicated effort and highlight areas where contributions are desired.

Gradient checking code

Dear all,

thanks for this package. It has the potential to become the NN package with the best tradeoff between speed, ease of use, and extensibility (thanks to Julia). Great work!

I wonder if anybody has any gradient checking code lying around that works for generic networks. If we had this code, the tests could be simplified and it would be much easier to verify the correctness of extensions.

If not, I will look into writing the code.
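In case it is useful as a starting point, the generic central-difference recipe is short. A sketch (not Mocha-specific; net_loss is assumed to be a closure that runs a forward pass and returns the objective, and params / analytic_grad are flat views of one parameter blob and its backprop gradient):

function check_gradient(net_loss::Function, params::Vector{Float64},
                        analytic_grad::Vector{Float64}; eps=1e-4, tol=1e-5)
  for i in 1:length(params)
    orig = params[i]
    params[i] = orig + eps; lp = net_loss()   # loss with params[i] nudged up
    params[i] = orig - eps; lm = net_loss()   # loss with params[i] nudged down
    params[i] = orig                          # restore
    numeric = (lp - lm) / (2 * eps)           # central difference
    if abs(numeric - analytic_grad[i]) > tol * max(1.0, abs(numeric))
      error("gradient mismatch at index $i: $numeric vs $(analytic_grad[i])")
    end
  end
  return true
end

Running this over each parameter blob of a small net with random inputs would cover the "generic networks" case, at the cost of two forward passes per parameter.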

Best,
Philipp.
