szcom / rnnlib

898 stars, 228 forks, 10.56 MB

RNNLIB is a recurrent neural network library for sequence learning problems. Forked from Alex Graves work http://sourceforge.net/projects/rnnl/

License: GNU General Public License v3.0

CMake 2.05% Python 0.11% Shell 2.26% C 71.45% Makefile 6.35% C++ 4.15% CSS 0.04% HTML 3.48% Fortran 7.16% Lex 0.02% Yacc 0.06% Perl 0.04% LiveScript 0.40% Scilab 0.01% DIGITAL Command Language 0.34% TeX 0.80% M4 0.45% PLSQL 0.84%

rnnlib's People

Contributors

bquast

rnnlib's Issues

Bad use of cmake

It is currently not possible to run cmake from a directory like: mkdir build; cd build; cmake ..

Consider removing the dependency on running cmake from the root folder.

How to prime the sample

I found the file prime_set.txt, which contains four different writing styles. If anyone can tell me how to generate samples in a specific style by priming, that would be very helpful. Thanks in advance.

About the training results

Hi,
I am wondering where the trained model is stored once the training process finishes. Is autosave meant to save it? I use autosave, but it only saves the intermediate results. How do I save the final trained model?

Thanks!
Best regards!

Binaries and network parameters files for online_prediction

Hello,

I am trying to use rnnsynth for a project. I eventually managed to compile the binaries on OSX, but I am having trouble using the preparation script (build_netcdf.sh) and training the model.

Would it be possible to share the "from_step2.best_loss.save" file (generated from step 2) so users can generate plots and position output from rnnsynth, at least for the online_prediction example?

Also, since the binaries are quite hard to build, it would greatly help future users to distribute a binary package for a standard Linux.

Thank you for your help; it will be greatly appreciated!

Kevin Donnot

no more text-files in the database to download

Can you please help? Yesterday I succeeded in downloading the database with all the figures (.png), resulting in the following folder structure:

Now for training I need some text files of the form:
ascii/a01/a01-000/a01-000u.txt and so on. Maybe someone still has this file and can share it.
Thank you in advance for your feedback/help.

check_synth2.config not in repo

From the README.md

Gradient check

To gain some confidence that the build is fine, run the gradient check:

gradient_check --autosave=false check_synth2.config

I don't see check_synth2.config anywhere in the repository

build_netcdf.sh not completing?

Hi,

Apologies for this issue; I don't fully understand why this is happening. I get a few errors after running build_netcdf.sh: it reads the data files for a while, and then:

creating netcdf variable targetPatterns (6765601, 3)
/Users/Andrew/Desktop/rnnlib/examples/online_prediction/../../utils/normalise_netcdf.sh: line 65: ncks: command not found
online_validation.nc online_validation.nc_ADJ
{'outputArrayName': 'targetPatterns', 'maxArraySize': 8000000, 'stdMeanFilename': 'online.nc', 'booleanColomn': -1, 'bigFile': False, 'inputArrayName': 'targetPatterns'}
inputFilename online_validation.nc
loading in input array
reading std deviations and means from online.nc
Traceback (most recent call last):
  File "/Users/Andrew/Desktop/rnnlib/examples/online_prediction/../../utils/normalise_netcdf.py", line 64, in <module>
    inputStds = array(stdMeanFile.variables[options.inputArrayName+'Stds'].getValue())
KeyError: 'targetPatternsStds'
/Users/Andrew/Desktop/rnnlib/examples/online_prediction/../../utils/normalise_netcdf.sh: line 79: ncks: command not found
rm: online_validation.nc_ADJ: No such file or directory

Any help is appreciated,

Andy
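Two things appear to go wrong in the log above: `ncks` (part of the NCO toolkit) is not installed, and the stats file online.nc is then missing the `targetPatternsStds` variable that normalise_netcdf.py expects, hence the KeyError. For orientation, the normalisation the script is attempting is just a per-column rescale by stored means and standard deviations; a minimal sketch of that idea (illustrative only, not the actual normalise_netcdf.py code):

```python
# Sketch of per-column normalisation: subtract the stored mean and divide
# by the stored standard deviation for each column of the data array.
# (The real script reads these arrays from a netCDF stats file, which is
# where the KeyError comes from when they were never written.)

def normalise(rows, means, stds):
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in rows]

rows = [[1.0, 10.0], [3.0, 30.0]]
means = [2.0, 20.0]
stds = [1.0, 10.0]
print(normalise(rows, means, stds))  # [[-1.0, -1.0], [1.0, 1.0]]
```

Installing the NCO tools (which provide `ncks`) and re-running build_netcdf.sh from scratch is the first thing to try.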

Create rnnlib nc dataset

Hi,
When I run ./build_netcdf.sh to create the rnnlib nc dataset, I get this error:

Traceback (most recent call last):
File "./arabic_offline.py", line 2, in
import netcdf_helpers
ImportError: No module named netcdf_helpers

I have installed the netCDF Operators (NCO), though.
Please help me to resolve the issue.
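The NCO tools do not provide this Python module: netcdf_helpers.py is a helper script that ships in rnnlib's utils/ directory, so Python has to be told where to find it. A minimal workaround (the checkout path below is an assumption; adjust it to your own):

```python
import os
import sys

# Hypothetical location of your rnnlib checkout; adjust as needed.
RNNLIB_UTILS = os.path.expanduser("~/rnnlib/utils")

# Put the utils directory at the front of Python's module search path
# so that `import netcdf_helpers` can resolve.
sys.path.insert(0, RNNLIB_UTILS)
```

Alternatively, export PYTHONPATH=/path/to/rnnlib/utils in the shell before running build_netcdf.sh. Note that netcdf_helpers itself imports ScientificPython's NetCDF bindings, which must also be installed.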

change bias

How can I prime the trained RNN when running rnnsynth, and how can I change the bias? I have searched for days and tried a lot via the config file and the command line, with no luck so far. Can anybody help? Thanks so much!
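On the bias half of the question: in Graves' handwriting-synthesis formulation (which rnnlib follows), a sampling bias b >= 0 shrinks the predicted standard deviations and sharpens the mixture weights, so a higher bias gives cleaner, less varied strokes. A sketch of the standard-deviation part under that formulation (illustrative, not rnnlib's actual code; how the bias is exposed in rnnsynth's config I cannot confirm):

```python
import math

def biased_sigma(log_sigma, bias):
    # Graves-style biased sampling: the network emits log standard
    # deviations, and sampling uses sigma_hat = exp(log_sigma - b),
    # so bias b > 0 tightens the output distribution.
    return math.exp(log_sigma - bias)

unbiased = biased_sigma(0.5, 0.0)
biased = biased_sigma(0.5, 1.0)
assert biased < unbiased  # higher bias -> tighter, cleaner samples
```

Grepping the rnnlib source for "bias" is probably the quickest way to find the corresponding config option.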

Gradient Check Failed On Ubuntu 14.04 and OSX Mavericks

I'm not sure this issue is directly related to the code, but this seemed the best channel I have for my problem. I have gcc and g++ at version 4.4, and building the executables works fine. However, when I run a gradient check, it fails on the first weight from hidden_1_0 to output. Any ideas what might be causing this?
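For reference, a gradient check compares the analytic derivative computed by backpropagation against a central finite difference, weight by weight; a persistent mismatch on one weight points at either a backprop bug or floating-point precision. A minimal sketch of the idea on a toy loss:

```python
def numerical_grad(f, w, eps=1e-5):
    # Central finite difference: (f(w+eps) - f(w-eps)) / (2*eps)
    return (f(w + eps) - f(w - eps)) / (2 * eps)

# Toy loss f(w) = w^2 with analytic gradient 2w; a gradient check
# asserts the two agree to within a small tolerance.
f = lambda w: w * w
w = 3.0
assert abs(numerical_grad(f, w) - 2 * w) < 1e-6
```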

Constants in show_pen.m

I am trying to adapt this program to run with my own sample sets. While doing so, I found that the outputs of the neural network do not correspond directly to relative coordinates; instead, some constants, labelled muXY and devXY in show_pen.m, must first be multiplied in and added.

Tweaking these constants, I have not worked out their exact significance, only that the output becomes distorted if the ratio between them changes too much.

I assume that these constants are dependent on the dataset. I have not found any references to them in the source code, nor in the original paper published by Alex Graves. What are these constants and how would I calculate them for my own dataset?
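Most likely muXY and devXY are the per-dimension mean and standard deviation used to normalise the pen offsets when the dataset was built; show_pen.m would then multiply by devXY and add muXY to undo that normalisation, which would explain why distorting their ratio distorts the output. Under that assumption (it is an assumption, not confirmed by the source), you would recompute them from your own raw (dx, dy) offsets, e.g.:

```python
import math

def mean_std(values):
    # Population mean and standard deviation of one offset dimension;
    # these play the roles of muXY and devXY for that dimension.
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)

dx = [1.0, 2.0, 3.0, 4.0]   # stand-in pen offsets; use your real dataset
dy = [0.5, 0.5, 1.5, 1.5]
muX, devX = mean_std(dx)
muY, devY = mean_std(dy)
assert muX == 2.5 and abs(devX ** 2 - 1.25) < 1e-12
```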

Generating sequence

Hi,
I'm trying to generate the sequence for "test" and then visualise it using show_pen, but the samples I get are too few to represent the word "test".
Note: I trained my own model following the described steps.

After running the rnnsynth, I got this result :
test
sentence:test
generating samples from network
Sample -0.196839 -0.00302385 0
Sample 0.0456886 0.205597 0
Sample 0.244101 2.64111 0
Sample -0.111819 -0.0829409 0
Sample 0.117248 -0.937839 1
Sample 1.0191 -2.28362 1
Sample -0.224143 -0.0129589 0
Sample -0.500786 -0.014552 0
Sample -0.122401 0.0879939 0
Sample 0.0576331 0.384149 0
End of sentence

The demo is not working anymore?

Hi, can someone please fix this? It is very handy for learning handwriting, or for studying the science behind writing. Please! Or is there something similar to it?

rnnsynth stalls out

When I run rnnsynth, I get output that looks like it is loading all the weights from my model, but then it starts to load sequences and just hangs. Activity Monitor shows rnnsynth dropping to 0% CPU. Here's my output:

$ rnnsynth trained/synth1d2015.11.16-22.46.13.644782.best_loss.save 
task = prediction

network:
task = prediction
11VerticalNet
------------------------------
5 layers:
10InputLayer "input" 1D (+) size 3
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_0_0" 1D (+) inputSize 1600 outputSize 400 source "input" 1200 peeps
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_1_0" 1D (+) inputSize 1600 outputSize 400 source "hidden_0_0" 1200 peeps
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_2_0" 1D (+) inputSize 1600 outputSize 400 source "hidden_1_0" 1200 peeps
20MixtureSamplingLayer "output" 1D (+) size 121 source "hidden_2_0"
------------------------------
20 connections:
"bias_to_output" (121 wts)
"hidden_2_0_to_output" (48400 wts)
"hidden_0_0_to_output" (48400 wts)
"hidden_1_0_to_output" (48400 wts)
"bias_to_charwindow_0" (30 wts)
"hidden_0_0_to_charwindow_0" (12000 wts)
"bias_to_hidden_0_0" (1600 wts)
"hidden_0_0_to_hidden_0_0_delay_-1" (640000 wts)
"input_to_hidden_0_0" (4800 wts)
"charwindow_0_to_hidden_0_0_delay_-1" (92800 wts)
"bias_to_hidden_1_0" (1600 wts)
"hidden_1_0_to_hidden_1_0_delay_-1" (640000 wts)
"hidden_0_0_to_hidden_1_0" (640000 wts)
"input_to_hidden_1_0" (4800 wts)
"charwindow_0_to_hidden_1_0" (92800 wts)
"bias_to_hidden_2_0" (1600 wts)
"hidden_2_0_to_hidden_2_0_delay_-1" (640000 wts)
"hidden_1_0_to_hidden_2_0" (640000 wts)
"input_to_hidden_2_0" (4800 wts)
"charwindow_0_to_hidden_2_0" (92800 wts)
------------------------------
bidirectional = false
symmetry = false
3658551 weights

setting random seed to 4203834845

loading dynamic data from trained/synth1d2015.11.16-22.46.13.644782.best_loss.save
loading trainer.epoch
loading trainer.mdlPriorMean
loading trainer.mdlPriorVariance
loading weightContainer.bias_to_charwindow_0__mdl_devs
loading weightContainer.bias_to_charwindow_0__mdl_weight_costs
loading weightContainer.bias_to_charwindow_0_mdl_dev_optimiser_deltas
loading weightContainer.bias_to_charwindow_0_mdl_dev_optimiser_g
loading weightContainer.bias_to_charwindow_0_mdl_dev_optimiser_n
loading weightContainer.bias_to_charwindow_0_weight_optimiser_deltas
loading weightContainer.bias_to_charwindow_0_weight_optimiser_g
loading weightContainer.bias_to_charwindow_0_weight_optimiser_n
loading weightContainer.bias_to_charwindow_0_weights
loading weightContainer.bias_to_hidden_0_0__mdl_devs
loading weightContainer.bias_to_hidden_0_0__mdl_weight_costs
loading weightContainer.bias_to_hidden_0_0_mdl_dev_optimiser_deltas
loading weightContainer.bias_to_hidden_0_0_mdl_dev_optimiser_g
loading weightContainer.bias_to_hidden_0_0_mdl_dev_optimiser_n
loading weightContainer.bias_to_hidden_0_0_weight_optimiser_deltas
loading weightContainer.bias_to_hidden_0_0_weight_optimiser_g
loading weightContainer.bias_to_hidden_0_0_weight_optimiser_n
loading weightContainer.bias_to_hidden_0_0_weights
loading weightContainer.bias_to_hidden_1_0__mdl_devs
loading weightContainer.bias_to_hidden_1_0__mdl_weight_costs
loading weightContainer.bias_to_hidden_1_0_mdl_dev_optimiser_deltas
loading weightContainer.bias_to_hidden_1_0_mdl_dev_optimiser_g
loading weightContainer.bias_to_hidden_1_0_mdl_dev_optimiser_n
loading weightContainer.bias_to_hidden_1_0_weight_optimiser_deltas
loading weightContainer.bias_to_hidden_1_0_weight_optimiser_g
loading weightContainer.bias_to_hidden_1_0_weight_optimiser_n
loading weightContainer.bias_to_hidden_1_0_weights
loading weightContainer.bias_to_hidden_2_0__mdl_devs
loading weightContainer.bias_to_hidden_2_0__mdl_weight_costs
loading weightContainer.bias_to_hidden_2_0_mdl_dev_optimiser_deltas
loading weightContainer.bias_to_hidden_2_0_mdl_dev_optimiser_g
loading weightContainer.bias_to_hidden_2_0_mdl_dev_optimiser_n
loading weightContainer.bias_to_hidden_2_0_weight_optimiser_deltas
loading weightContainer.bias_to_hidden_2_0_weight_optimiser_g
loading weightContainer.bias_to_hidden_2_0_weight_optimiser_n
loading weightContainer.bias_to_hidden_2_0_weights
loading weightContainer.bias_to_output__mdl_devs
loading weightContainer.bias_to_output__mdl_weight_costs
loading weightContainer.bias_to_output_mdl_dev_optimiser_deltas
loading weightContainer.bias_to_output_mdl_dev_optimiser_g
loading weightContainer.bias_to_output_mdl_dev_optimiser_n
loading weightContainer.bias_to_output_weight_optimiser_deltas
loading weightContainer.bias_to_output_weight_optimiser_g
loading weightContainer.bias_to_output_weight_optimiser_n
loading weightContainer.bias_to_output_weights
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1__mdl_devs
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1__mdl_weight_costs
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_g
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_n
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_weight_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_weight_optimiser_g
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_weight_optimiser_n
loading weightContainer.charwindow_0_to_hidden_0_0_delay_-1_weights
loading weightContainer.charwindow_0_to_hidden_1_0__mdl_devs
loading weightContainer.charwindow_0_to_hidden_1_0__mdl_weight_costs
loading weightContainer.charwindow_0_to_hidden_1_0_mdl_dev_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_1_0_mdl_dev_optimiser_g
loading weightContainer.charwindow_0_to_hidden_1_0_mdl_dev_optimiser_n
loading weightContainer.charwindow_0_to_hidden_1_0_weight_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_1_0_weight_optimiser_g
loading weightContainer.charwindow_0_to_hidden_1_0_weight_optimiser_n
loading weightContainer.charwindow_0_to_hidden_1_0_weights
loading weightContainer.charwindow_0_to_hidden_2_0__mdl_devs
loading weightContainer.charwindow_0_to_hidden_2_0__mdl_weight_costs
loading weightContainer.charwindow_0_to_hidden_2_0_mdl_dev_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_2_0_mdl_dev_optimiser_g
loading weightContainer.charwindow_0_to_hidden_2_0_mdl_dev_optimiser_n
loading weightContainer.charwindow_0_to_hidden_2_0_weight_optimiser_deltas
loading weightContainer.charwindow_0_to_hidden_2_0_weight_optimiser_g
loading weightContainer.charwindow_0_to_hidden_2_0_weight_optimiser_n
loading weightContainer.charwindow_0_to_hidden_2_0_weights
loading weightContainer.hidden_0_0_peepholes__mdl_devs
loading weightContainer.hidden_0_0_peepholes__mdl_weight_costs
loading weightContainer.hidden_0_0_peepholes_mdl_dev_optimiser_deltas
loading weightContainer.hidden_0_0_peepholes_mdl_dev_optimiser_g
loading weightContainer.hidden_0_0_peepholes_mdl_dev_optimiser_n
loading weightContainer.hidden_0_0_peepholes_weight_optimiser_deltas
loading weightContainer.hidden_0_0_peepholes_weight_optimiser_g
loading weightContainer.hidden_0_0_peepholes_weight_optimiser_n
loading weightContainer.hidden_0_0_peepholes_weights
loading weightContainer.hidden_0_0_to_charwindow_0__mdl_devs
loading weightContainer.hidden_0_0_to_charwindow_0__mdl_weight_costs
loading weightContainer.hidden_0_0_to_charwindow_0_mdl_dev_optimiser_deltas
loading weightContainer.hidden_0_0_to_charwindow_0_mdl_dev_optimiser_g
loading weightContainer.hidden_0_0_to_charwindow_0_mdl_dev_optimiser_n
loading weightContainer.hidden_0_0_to_charwindow_0_weight_optimiser_deltas
loading weightContainer.hidden_0_0_to_charwindow_0_weight_optimiser_g
loading weightContainer.hidden_0_0_to_charwindow_0_weight_optimiser_n
loading weightContainer.hidden_0_0_to_charwindow_0_weights
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1__mdl_devs
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1__mdl_weight_costs
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_deltas
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_g
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_mdl_dev_optimiser_n
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_weight_optimiser_deltas
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_weight_optimiser_g
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_weight_optimiser_n
loading weightContainer.hidden_0_0_to_hidden_0_0_delay_-1_weights
loading weightContainer.hidden_0_0_to_hidden_1_0__mdl_devs
loading weightContainer.hidden_0_0_to_hidden_1_0__mdl_weight_costs
loading weightContainer.hidden_0_0_to_hidden_1_0_mdl_dev_optimiser_deltas
loading weightContainer.hidden_0_0_to_hidden_1_0_mdl_dev_optimiser_g
loading weightContainer.hidden_0_0_to_hidden_1_0_mdl_dev_optimiser_n
loading weightContainer.hidden_0_0_to_hidden_1_0_weight_optimiser_deltas
loading weightContainer.hidden_0_0_to_hidden_1_0_weight_optimiser_g
loading weightContainer.hidden_0_0_to_hidden_1_0_weight_optimiser_n
loading weightContainer.hidden_0_0_to_hidden_1_0_weights
loading weightContainer.hidden_0_0_to_output__mdl_devs
loading weightContainer.hidden_0_0_to_output__mdl_weight_costs
loading weightContainer.hidden_0_0_to_output_mdl_dev_optimiser_deltas
loading weightContainer.hidden_0_0_to_output_mdl_dev_optimiser_g
loading weightContainer.hidden_0_0_to_output_mdl_dev_optimiser_n
loading weightContainer.hidden_0_0_to_output_weight_optimiser_deltas
loading weightContainer.hidden_0_0_to_output_weight_optimiser_g
loading weightContainer.hidden_0_0_to_output_weight_optimiser_n
loading weightContainer.hidden_0_0_to_output_weights
loading weightContainer.hidden_1_0_peepholes__mdl_devs
loading weightContainer.hidden_1_0_peepholes__mdl_weight_costs
loading weightContainer.hidden_1_0_peepholes_mdl_dev_optimiser_deltas
loading weightContainer.hidden_1_0_peepholes_mdl_dev_optimiser_g
loading weightContainer.hidden_1_0_peepholes_mdl_dev_optimiser_n
loading weightContainer.hidden_1_0_peepholes_weight_optimiser_deltas
loading weightContainer.hidden_1_0_peepholes_weight_optimiser_g
loading weightContainer.hidden_1_0_peepholes_weight_optimiser_n
loading weightContainer.hidden_1_0_peepholes_weights
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1__mdl_devs
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1__mdl_weight_costs
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_mdl_dev_optimiser_deltas
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_mdl_dev_optimiser_g
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_mdl_dev_optimiser_n
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_weight_optimiser_deltas
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_weight_optimiser_g
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_weight_optimiser_n
loading weightContainer.hidden_1_0_to_hidden_1_0_delay_-1_weights
loading weightContainer.hidden_1_0_to_hidden_2_0__mdl_devs
loading weightContainer.hidden_1_0_to_hidden_2_0__mdl_weight_costs
loading weightContainer.hidden_1_0_to_hidden_2_0_mdl_dev_optimiser_deltas
loading weightContainer.hidden_1_0_to_hidden_2_0_mdl_dev_optimiser_g
loading weightContainer.hidden_1_0_to_hidden_2_0_mdl_dev_optimiser_n
loading weightContainer.hidden_1_0_to_hidden_2_0_weight_optimiser_deltas
loading weightContainer.hidden_1_0_to_hidden_2_0_weight_optimiser_g
loading weightContainer.hidden_1_0_to_hidden_2_0_weight_optimiser_n
loading weightContainer.hidden_1_0_to_hidden_2_0_weights
loading weightContainer.hidden_1_0_to_output__mdl_devs
loading weightContainer.hidden_1_0_to_output__mdl_weight_costs
loading weightContainer.hidden_1_0_to_output_mdl_dev_optimiser_deltas
loading weightContainer.hidden_1_0_to_output_mdl_dev_optimiser_g
loading weightContainer.hidden_1_0_to_output_mdl_dev_optimiser_n
loading weightContainer.hidden_1_0_to_output_weight_optimiser_deltas
loading weightContainer.hidden_1_0_to_output_weight_optimiser_g
loading weightContainer.hidden_1_0_to_output_weight_optimiser_n
loading weightContainer.hidden_1_0_to_output_weights
loading weightContainer.hidden_2_0_peepholes__mdl_devs
loading weightContainer.hidden_2_0_peepholes__mdl_weight_costs
loading weightContainer.hidden_2_0_peepholes_mdl_dev_optimiser_deltas
loading weightContainer.hidden_2_0_peepholes_mdl_dev_optimiser_g
loading weightContainer.hidden_2_0_peepholes_mdl_dev_optimiser_n
loading weightContainer.hidden_2_0_peepholes_weight_optimiser_deltas
loading weightContainer.hidden_2_0_peepholes_weight_optimiser_g
loading weightContainer.hidden_2_0_peepholes_weight_optimiser_n
loading weightContainer.hidden_2_0_peepholes_weights
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1__mdl_devs
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1__mdl_weight_costs
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_mdl_dev_optimiser_deltas
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_mdl_dev_optimiser_g
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_mdl_dev_optimiser_n
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_weight_optimiser_deltas
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_weight_optimiser_g
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_weight_optimiser_n
loading weightContainer.hidden_2_0_to_hidden_2_0_delay_-1_weights
loading weightContainer.hidden_2_0_to_output__mdl_devs
loading weightContainer.hidden_2_0_to_output__mdl_weight_costs
loading weightContainer.hidden_2_0_to_output_mdl_dev_optimiser_deltas
loading weightContainer.hidden_2_0_to_output_mdl_dev_optimiser_g
loading weightContainer.hidden_2_0_to_output_mdl_dev_optimiser_n
loading weightContainer.hidden_2_0_to_output_weight_optimiser_deltas
loading weightContainer.hidden_2_0_to_output_weight_optimiser_g
loading weightContainer.hidden_2_0_to_output_weight_optimiser_n
loading weightContainer.hidden_2_0_to_output_weights
loading weightContainer.input_to_hidden_0_0__mdl_devs
loading weightContainer.input_to_hidden_0_0__mdl_weight_costs
loading weightContainer.input_to_hidden_0_0_mdl_dev_optimiser_deltas
loading weightContainer.input_to_hidden_0_0_mdl_dev_optimiser_g
loading weightContainer.input_to_hidden_0_0_mdl_dev_optimiser_n
loading weightContainer.input_to_hidden_0_0_weight_optimiser_deltas
loading weightContainer.input_to_hidden_0_0_weight_optimiser_g
loading weightContainer.input_to_hidden_0_0_weight_optimiser_n
loading weightContainer.input_to_hidden_0_0_weights
loading weightContainer.input_to_hidden_1_0__mdl_devs
loading weightContainer.input_to_hidden_1_0__mdl_weight_costs
loading weightContainer.input_to_hidden_1_0_mdl_dev_optimiser_deltas
loading weightContainer.input_to_hidden_1_0_mdl_dev_optimiser_g
loading weightContainer.input_to_hidden_1_0_mdl_dev_optimiser_n
loading weightContainer.input_to_hidden_1_0_weight_optimiser_deltas
loading weightContainer.input_to_hidden_1_0_weight_optimiser_g
loading weightContainer.input_to_hidden_1_0_weight_optimiser_n
loading weightContainer.input_to_hidden_1_0_weights
loading weightContainer.input_to_hidden_2_0__mdl_devs
loading weightContainer.input_to_hidden_2_0__mdl_weight_costs
loading weightContainer.input_to_hidden_2_0_mdl_dev_optimiser_deltas
loading weightContainer.input_to_hidden_2_0_mdl_dev_optimiser_g
loading weightContainer.input_to_hidden_2_0_mdl_dev_optimiser_n
loading weightContainer.input_to_hidden_2_0_weight_optimiser_deltas
loading weightContainer.input_to_hidden_2_0_weight_optimiser_g
loading weightContainer.input_to_hidden_2_0_weight_optimiser_n
loading weightContainer.input_to_hidden_2_0_weights
epoch = 3

loading sequences from 0 to 10748

It never moves past this point. I didn't let my network train for 20 epochs; would that make a difference here? Does rnnsynth need any additional input parameters? I did notice this line during the CMake build process, but assumed the fallback library it found was fine:

NO MKL Libs using: /Users/aferriss/Desktop/rnnlib/install/lib/libopenblas.a

"cmake --build ." does not complete

I followed the manual to include the library and build it, but an error occurs during the build process. Is there a better way to deal with it?

In file included from /Users/home/rnnlib/src/Val.cpp:22:
In file included from /Users/home/rnnlib/src/MultilayerNet.hpp:21:
In file included from /Users/home/rnnlib/src/Mdrnn.hpp:30:
/Users/home/rnnlib/src/NetcdfDataset.hpp:26:10: fatal error:
'netcdfcpp.h' file not found
#include "netcdfcpp.h"
^~~~~~~~~~~~~
2 warnings and 1 error generated.
make[2]: *** [CMakeFiles/rnnval.dir/src/Val.cpp.o] Error 1
make[1]: *** [CMakeFiles/rnnval.dir/all] Error 2
make: *** [all] Error 2

cmake

CMake Error: The source directory "/home/user/Desktop/rnnlib-master" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.

online_prediction train error

I don't know what happened or how to handle it.
When I run this command, I get the result as follows.
@szcom
Is it right?
writing to log file [email protected]
task = prediction

network:
task = prediction
13MultilayerNet

5 layers:
10InputLayer "input" 1D (+) size 3
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_0_0" 1D (+) inputSize 1600 outputSize 400 source "input" 1200 peeps
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_1_0" 1D (+) inputSize 1600 outputSize 400 source "hidden_0_0" 1200 peeps
(R) 11Lstm1dLayerI4TanhS0_8LogisticE "hidden_2_0" 1D (+) inputSize 1600 outputSize 400 source "hidden_1_0" 1200 peeps
18MixtureOutputLayer "output" 1D (+) size 121 source "hidden_2_0"

20 connections:
"bias_to_output" (121 wts)
"hidden_2_0_to_output" (48400 wts)
"hidden_0_0_to_output" (48400 wts)
"hidden_1_0_to_output" (48400 wts)
"bias_to_charwindow_0" (30 wts)
"hidden_0_0_to_charwindow_0" (12000 wts)
"bias_to_hidden_0_0" (1600 wts)
"hidden_0_0_to_hidden_0_0_delay_-1" (640000 wts)
"input_to_hidden_0_0" (4800 wts)
"charwindow_0_to_hidden_0_0_delay_-1" (92800 wts)
"bias_to_hidden_1_0" (1600 wts)
"hidden_1_0_to_hidden_1_0_delay_-1" (640000 wts)
"hidden_0_0_to_hidden_1_0" (640000 wts)
"input_to_hidden_1_0" (4800 wts)
"charwindow_0_to_hidden_1_0" (92800 wts)
"bias_to_hidden_2_0" (1600 wts)
"hidden_2_0_to_hidden_2_0_delay_-1" (640000 wts)
"hidden_1_0_to_hidden_2_0" (640000 wts)
"input_to_hidden_2_0" (4800 wts)
"charwindow_0_to_hidden_2_0" (92800 wts)

bidirectional = false
symmetry = false
3658551 weights

setting random seed to 1006135989

3658551 uninitialised weights randomised uniformly in [-0.1,0.1]
trainer:
epoch = 0
savename = [email protected]
batchLearn = false
seqsPerWeightUpdate = 1
maxTestsNoBest = 20

training data:
numSequences = 10748
numTimesteps = 6765601
avg timesteps/seq = 629.475
1 filenames
online.nc
inputSize = 3
outputSize = 0
numDims = 1
task = prediction
shuffled = true

validation data:
numSequences = 1438
numTimesteps = 885964
avg timesteps/seq = 616.108
1 filenames
online_validation.nc
inputSize = 3
outputSize = 0
numDims = 1
task = prediction
shuffled = false


rmsprop
learnRate = 0.0001
momentum = 0.9

autosave filename [email protected]
best save filename root [email protected]

training...
loading sequences from 0 to 10748

epoch 0 took 7 hours 13 minutes 27 seconds (2.41971 secs/seq, 260.145 its/sec, 3709.15 MwtOps/sec)

train errors (running):
loss -753.446

validation errors:
loading sequences from 0 to 1438
loss -866.95

saving to [email protected]
best network (loss)
saving to [email protected]_loss.save

epoch 1 took 6 hours 22 minutes 42 seconds (2.1364 secs/seq, 294.643 its/sec, 4201.02 MwtOps/sec)

train errors (running):
loss -914.131

validation errors:

Make work with CMake 3.0 and newer

I needed to add this to CMakeLists.txt to make it work with CMake 3.0 and newer:

    IF(CMAKE_VERSION VERSION_EQUAL "3.0.0" OR
       CMAKE_VERSION VERSION_GREATER "3.0.0")
     CMAKE_POLICY(SET CMP0045 OLD)
    ENDIF()

How to create labels?

Hi everyone,
I wonder how to create labels. I noticed that in the Arabic offline example there is a .tru file per image containing a couple of fields, and I don't understand what they are. I went over the Python code that generates the binary files and got some ideas, but I really don't get what "wordTargetStrings" is.
In the .tru file there is a field (keyword) called "ZIP", and wordTargetStrings just takes the values in front of it.
I'd appreciate it if someone could clarify that for me.
Cheers,
Saman

gradient_check: command not found

After running the ./build_netcdf.sh command, when I run the gradient_check check_synth1d.config command it shows gradient_check: command not found. Could you please help me out? Thanks a lot.

Help for getting ScientificPython on OSX

On OSX El Capitan, I'm trying to run build/examples/online_prediction/build_netcdf.sh, and I'm running into problems with ScientificPython.

The latest ScientificPython binary for OSX requires Python 2.4. That Python is so old that pyenv has trouble building it.

So I tried building the latest development ScientificPython (2.9.4) with the Python I have installed, 2.7.11. Other than spitting out a ton of unused-function warnings, the build seems to have succeeded. However, when I attempt to run build_netcdf, this is what I get:

./build_netcdf.sh

Traceback (most recent call last):
  File "./online_delta.py", line 3, in <module>
    import netcdf_helpers
  File "/Users/jbryson3/code/libs/rnnlib/utils/netcdf_helpers.py", line 18, in <module>
    from Scientific.IO.NetCDF import NetCDFFile
  File "/usr/local/lib/python2.7/site-packages/Scientific/IO/NetCDF.py", line 168, in <module>
    from Scientific._netcdf import *
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/Scientific/_netcdf.so, 2): Symbol not found: _H5P_CLS_DATASET_ACCESS_ID_g
  Referenced from: /usr/local/lib/python2.7/site-packages/Scientific/_netcdf.so
  Expected in: flat namespace
 in /usr/local/lib/python2.7/site-packages/Scientific/_netcdf.so

I realize this may be a bit out of scope for this codebase, but I was hoping someone could point me in the right direction. Thanks!
