
word-rnn's People

Contributors

ajkl, audy, donglixp, ericzeiberg, germuth, guillitte, hughperkins, johnny5550822, josephmmisiti, karpathy, larspars, makemeasandwich, maurizi, mpiffault, siemiatj, skwerlman, udibr, yafahedelman

word-rnn's Issues

bad argument #2 to '?' (start index out of bound)

I'm new to both Torch and Lua, and I ran across this issue, which is driving me crazy:
/home/handa/torch/install/bin/luajit: bad argument #2 to '?' (start index out of bound at /tmp/luarocks_torch-scm-1-9476/torch7/generic/Tensor.c:984)
stack traceback:
[C]: at 0x7f8ee1c75b70
[C]: in function '__index'
/home/handa/DataHandler.lua:85: in function '__init'
/home/handa/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/handa/torch/install/share/lua/5.1/torch/init.lua:87>
[C]: in function 'DataHandler'
train_match.lua:51: in main chunk
[C]: in function 'dofile'
...anda/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

Can anyone help me out? Thanks a lot!

Maybe clarify how to download and use the GloVe vectors?

Howdy. First, I'm having fun playing with the word-rnn package. Thanks for making it.
However, I think the README could say a bit more about how to download and use the GloVe vectors. It took me a while to figure out exactly which file I needed to download, and that I had to rename it before putting it in the util/glove folder. Maybe you could say:

This code can use pre-trained word vectors from the GloVe project, located at http://nlp.stanford.edu/projects/glove/. From that site, download the file http://nlp.stanford.edu/data/glove.6B.zip. From the zip, extract the file ‘glove.6B.200d.txt’, rename it to ‘vectors.6B.200d.txt’, and place the file in word-rnn/util/glove/.

Just a small suggestion.
Cheers!
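A quick Lua sanity check along these lines can confirm that the renamed file is in place and has the expected dimensionality before training. This is only a sketch; the path and the 200-dimension assumption follow the suggestion quoted above, so adjust both to whatever you actually download.

    -- sanity check: confirm the renamed GloVe file exists and count its dimensions
    local path = 'util/glove/vectors.6B.200d.txt'
    local f = assert(io.open(path, 'r'), 'GloVe file not found at ' .. path)
    local first = f:read('*l'); f:close()
    local dims = -1                       -- the first field on each line is the word itself
    for _ in first:gmatch('%S+') do dims = dims + 1 end
    print(('found %d-dimensional vectors in %s'):format(dims, path))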

Unable to use with OpenCL

I am unable to make it work with OpenCL.
It seems you need to install fbcunn, which implies having CUDA installed and an NVIDIA GPU.

I tried to use it with OpenCL, but I get error messages:
"No Luarocks module found for fbcunn"
and, after installing fbcunn:
"no CUDA-capable device is detected"

which is normal since I don't have one, but annoying since I was using the option "-opencl 1".
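One possible direction, offered only as a sketch (I have not checked how word-rnn actually loads fbcunn), is to make the fbcunn require optional so that OpenCL or CPU runs do not pull in CUDA at all:

    -- hypothetical guard: treat fbcunn as optional instead of a hard requirement
    local ok, fbcunn = pcall(require, 'fbcunn')
    if not ok then
      print('fbcunn not available; continuing with nn (and clnn for -opencl 1)')
    end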

Request for correct options to run in CPU mode

Hi:

I'm interested in your word version of karpathy's rnn.

I'm new to Torch/Lua and have the original running error-free on 64-bit Mint with an i7 in CPU mode (a GPU will be on the next machine).

When I run your version of the sanity check I receive the following error:

$ th train.lua -gpuid -1

/home/pixelhead/torch/install/bin/luajit: /home/pixelhead/torch/install/share/lua/5.1/trepl/init.lua:383: ./util/SharedDropout.lua:3: attempt to call field 'CudaTensor' (a nil value)
stack traceback:
[C]: in function 'error'
/home/pixelhead/torch/install/share/lua/5.1/trepl/init.lua:383: in function 'require'
train.lua:119: in main chunk
[C]: in function 'dofile'
...head/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

I'm assuming 'CudaTensor' applies to GPU use only. Is there a modification I can make in 'SharedDropout.lua' or another script for CPU use?

Looking forward to comparing the output from the original to your version.

Cheers,
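A minimal sketch of the kind of change being asked about, assuming the only CUDA dependency in util/SharedDropout.lua is a hard-coded torch.CudaTensor constructor (this is not verified against the actual file, just the error above):

    -- fall back to torch.Tensor when cutorch is not available (e.g. -gpuid -1)
    require 'torch'
    local has_cuda = pcall(require, 'cutorch')
    local Tensor = has_cuda and torch.CudaTensor or torch.Tensor
    local noise = Tensor()    -- wherever the file currently calls torch.CudaTensor()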

module 'cudnn' not found:No LuaRocks module found for cudnn

Hi there,

Just found this rework of the awesome char-rnn and I think it would be very useful in my case.
I tried using it but after the installation I got some errors.

I did

th train.lua -gpuid -1

on a Debian machine, and this is the error:

/home/deep/torch/install/bin/luajit: /home/deep/torch/install/share/lua/5.1/trepl/init.lua:389: /home/deep/torch/install/share/lua/5.1/trepl/init.lua:389: /home/deep/torch/install/share/lua/5.1/trepl/init.lua:389: module 'cudnn' not found:No LuaRocks module found for cudnn
    no field package.preload['cudnn']
    no file '/home/deep/.luarocks/share/lua/5.1/cudnn.lua'
    no file '/home/deep/.luarocks/share/lua/5.1/cudnn/init.lua'
    no file '/home/deep/torch/install/share/lua/5.1/cudnn.lua'
    no file '/home/deep/torch/install/share/lua/5.1/cudnn/init.lua'
    no file './cudnn.lua'
    no file '/home/deep/torch/install/share/luajit-2.1.0-beta1/cudnn.lua'
    no file '/usr/local/share/lua/5.1/cudnn.lua'
    no file '/usr/local/share/lua/5.1/cudnn/init.lua'
    no file '/home/deep/.luarocks/lib/lua/5.1/cudnn.so'
    no file '/home/deep/torch/install/lib/lua/5.1/cudnn.so'
    no file '/home/deep/torch/install/lib/cudnn.so'
    no file './cudnn.so'
    no file '/usr/local/lib/lua/5.1/cudnn.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
    [C]: in function 'error'
    /home/deep/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
    train.lua:37: in main chunk
    [C]: in function 'dofile'
    ...deep/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00405d50

So I tried to do this

luarocks install cudnn

and this is the error:

Installing https://raw.githubusercontent.com/torch/rocks/master/cudnn-scm-1.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/cudnn-scm-1.rockspec... switching to 'build' mode

Missing dependencies for cudnn:
cutorch

Using https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec... switching to 'build' mode
Cloning into 'cutorch'...
remote: Counting objects: 229, done.
remote: Compressing objects: 100% (183/183), done.
remote: Total 229 (delta 63), reused 88 (delta 44), pack-reused 0
Receiving objects: 100% (229/229), 241.20 KiB | 0 bytes/s, done.
Resolving deltas: 100% (63/63), done.
Checking connectivity... done.
Warning: unmatched variable LUALIB

jopts=$(getconf _NPROCESSORS_CONF)

echo "Building on $jopts cores"
cmake -E make_directory build && cd build && cmake .. -DLUALIB= -DLUA_INCDIR=/home/deep/torch/install/include -DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS} -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/home/deep/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/deep/torch/install/lib/luarocks/rocks/cutorch/scm-1" && make -j$jopts install

Building on 3 cores
-- The C compiler identification is GNU 4.9.2
-- The CXX compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found Torch7 in /home/deep/torch/install
CMake Error at /usr/share/cmake-3.0/Modules/FindCUDA.cmake:568 (message):
Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
CMakeLists.txt:7 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_cutorch-scm-1-3937/cutorch/build/CMakeFiles/CMakeOutput.log".

Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec - Build error: Failed building.

Any help with this?

Thanks

seq_length for word level

Hello

I couldn't quite tell from reading through the code: if I'm training at the word level, does seq_length refer to a sequence of N words or a sequence of N characters?

thanks

index out of range in THTensorMath.c

Hello, I am testing word-rnn with the word_level = 1 parameter.

The learning process starts successfully, but after a while I encounter the following error.
There are other threads about the same issue; they suggest using ASCII encoding for the input file as the fix (karpathy/char-rnn#51).
So I checked and converted my input file to ASCII, but I still get the error.


./util/OneHot.lua:18: index out of range at
.../torch/pkg/torch/lib/TH/generic/THTensorMath.c:141
stack traceback:
[C]: in function 'index'
./util/OneHot.lua:18: in function 'func'
.../torch/install/share/lua/5.1/nngraph/gmodule.lua:275: in function 'neteval'
.../torch/install/share/lua/5.1/nngraph/gmodule.lua:310: in function 'forward'
train.lua:260: in function 'opfunc'
.../torch/install/share/lua/5.1/optim/rmsprop.lua:32: in function 'optimizer'
train.lua:318: in main chunk
[C]: in function 'dofile'

...

How can I avoid this error? Can anybody help?
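One thing worth ruling out is a stale data cache. The sketch below assumes word-rnn keeps char-rnn-style data.t7/vocab.t7 cache files in the data directory; if the cached tensors contain indices larger than the cached vocabulary, OneHot fails with exactly this out-of-range error.

    require 'torch'
    local vocab = torch.load('data/tinyshakespeare/vocab.t7')   -- adjust to your -data_dir
    local data  = torch.load('data/tinyshakespeare/data.t7')
    local vocab_size = 0
    for _ in pairs(vocab) do vocab_size = vocab_size + 1 end
    print('max index in data:', data:max(), 'vocab size:', vocab_size)
    -- if data:max() exceeds the vocab size, delete the cached .t7 files and re-run
    -- train.lua so they are regenerated from the current input file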

invalid data for nn scm-1

Hi. I'm pretty sure I've followed all the install instructions correctly, but when I try to train on the sample data, I get the following:

~/word-rnn $ th train.lua -data_dir data/tinyshakespeare
/home/randall/torch/install/bin/luajit: /home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: /home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: /home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: /home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: /home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: .../randall/torch/install/share/lua/5.1/luarocks/loader.lua:154: Invalid data in manifest file for module nn.THNN (invalid data for nn scm-1)
stack traceback:
[C]: in function 'error'
/home/randall/torch/install/share/lua/5.1/trepl/init.lua:383: in function 'require'
train.lua:23: in main chunk
[C]: in function 'dofile'
...dall/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

Any help is appreciated. Thanks.

inspect_checkpoint.lua doesn't seem to work with GloVe vectors

When I try to use inspect_checkpoint.lua on a checkpoint that used GloVe, I get the following:

randall@Ahmed-linux:~/word-rnn$ th inspect_checkpoint.lua cv_26_512_.3_l1_3.87/lm_lstm_epoch7.11_4.2338.t7
using CUDA on GPU 0...
/home/randall/torch/install/bin/luajit: /home/randall/torch/install/share/lua/5.1/torch/File.lua:343: unknown Torch class
stack traceback:
[C]: in function 'error'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:343: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:353: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:353: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
...
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
.../randall/torch/install/share/lua/5.1/nngraph/gmodule.lua:495: in function 'read'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/randall/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
inspect_checkpoint.lua:37: in main chunk
[C]: in function 'dofile'
...dall/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50
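"unknown Torch class" usually means torch.load is deserializing a class that has not been require'd yet. A sketch of a workaround, assuming the GloVe checkpoints embed the repo's custom embedding module defined in util/GloVeEmbedding.lua (file and class names guessed from the GloVe traces elsewhere on this page, and the checkpoint layout assumed to follow char-rnn):

    require 'torch'
    require 'nn'
    require 'nngraph'
    require 'cutorch'; require 'cunn'   -- the checkpoint above was saved on GPU
    require 'util.GloVeEmbedding'       -- registers the custom class before torch.load
    local checkpoint = torch.load('cv_26_512_.3_l1_3.87/lm_lstm_epoch7.11_4.2338.t7')
    print(checkpoint.opt)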

error in loading glove vectors

Hi
Whenever I try to pass a GloVe file with 150 dimensions, I get this error.

th train.lua
loading data files...

cutting off end of data so that the batches/sequences divide evenly
reshaping tensor...
data load done. Number of data batches in train: 1039, val: 184, test: 0

Vocab Size: 7027, Threshold: 10
creating an lstm with 2 layers

loading glove vectors

/opt/torch/install/bin/luajit: bad argument #2 to '?' (out of range at /opt/torch/pkg/torch/generic/Tensor.c:890)

stack traceback:
[C]: at 0x7f4ecca9b8f0
[C]: in function '__index'
./util/GloVeEmbedding.lua:65: in function 'parseEmbeddingFile'
./util/GloVeEmbedding.lua:44: in function '__init'
/opt/torch/install/share/lua/5.1/torch/init.lua:91: in function
[C]: in function 'GloVeEmbeddingFixed'
train.lua:154: in main chunk
[C]: in function 'dofile'
/opt/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405e70

My config file looks like this:

-- model params
opt.rnn_size = 300 --size of LSTM internal state
opt.num_layers = 2 --number of layers in the LSTM
opt.model = 'lstm' --(lstm,gru or rnn)
opt.wordlevel = 1 --(1 for word level 0 for char)

-- optimization
opt.learning_rate = 3e-3 --learning rate
opt.learning_rate_decay = 0.97 --learning rate decay
opt.learning_rate_decay_after = 5 --in number of epochs, when to start decaying the learning rate
opt.decay_rate = 0.95 --decay rate for rmsprop
opt.dropout = 0.35 --dropout for regularization, used after each RNN hidden layer. (0 = no dropout)
opt.seq_length = 80 --number of timesteps to unroll for
opt.batch_size = 10 --number of sequences to train on in parallel
opt.max_epochs = 20 --number of full passes through the training data
opt.grad_clip = 3 --clip gradients at this value
opt.train_frac = 0.85 --fraction of data that goes into train set
opt.val_frac = 0.15 --fraction of data that goes into validation set
--test_frac will be computed as (1 - train_frac - val_frac)
opt.init_from = '' --initialize network parameters from checkpoint at this path
opt.optim = 'rmsprop' --which optimizer to use: (rmsprop|sgd|adagrad|asgd|adam)
opt.optim_alpha = 0.8 --alpha for adagrad/rmsprop/momentum/adam
opt.optim_beta = 0.999 --beta used for adam
opt.optim_epsilon = 1e-8 --epsilon that goes into denominator for smoothing

-- bookkeeping
opt.seed = 123 --torch manual random number generator seed
opt.print_every = 1 --how many steps/minibatches between printing out the loss
opt.eval_val_every = 200 --every how many iterations should we evaluate on validation data?
opt.checkpoint_dir = 'cv' --output directory where checkpoints get written
opt.savefile = 'checkpoint' --filename to autosave the checkpoint to. Will be inside cv/
opt.threshold = 10 --minimum number of occurrences a token must have to be included
--(ignored if -wordlevel is 0)

-- GPU/CPU
opt.backend = 'cpu' --(cpu|cuda|cl)
opt.gpuid = 0 --which gpu to use (ignored if backend is cpu)

-- Glove
opt.glove = 1 --whether or not to use GloVe embeddings
opt.embedding_file = 'util/glove/vectors.txt' --filename of the glove (or other) embedding file
opt.embedding_file_size = 150 --feature vector size of embedding file

-- Sampling Configuration

-- checkpoint
opt.checkpoint = 'lm_lstm_epoch7.56_1.2685.t7' --model checkpoint to use for sampling. If Empty, pulls last checkpoint
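The traceback points into GloVeEmbedding.lua's file parser, so one thing to rule out (a sketch based on my reading of the error, not a verified fix) is a row in the embedding file whose length does not match opt.embedding_file_size = 150:

    -- scan util/glove/vectors.txt for rows that do not have exactly 150 values
    local expected = 150
    local lineno = 0
    for line in io.lines('util/glove/vectors.txt') do
      lineno = lineno + 1
      local n = -1                                 -- the first field is the word itself
      for _ in line:gmatch('%S+') do n = n + 1 end
      if n ~= expected then
        print(('line %d has %d values instead of %d'):format(lineno, n, expected))
        break
      end
    end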

Error: bad argument #2 to '?'

I received an error similar to the ones other people have described previously, but I searched for and tried the available debugging suggestions, and none of them solved the issue.

Could someone who wrote this repository help us take a look?

(Screenshot of the error attached in the original issue.)

Memory Usage

You mention in the article about this project that you are running it on a GTX 980. I'm running it on a GTX 970 with 4 GB of RAM, and I am running out of memory using the same GloVe vectors and a 104 MB data set with the default word-level options.

Are you running on a 980 Ti with 12 GB of memory? Is there something else at play?

Thanks, and huge thanks for making this code available.
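Not an answer to the hardware question, but the usual knobs for fitting a run into less GPU memory live in the same config format shown earlier on this page. A sketch of more conservative settings (the exact savings depend on the data and vocabulary size):

    opt.batch_size = 5       -- fewer sequences resident on the GPU at once
    opt.seq_length = 40      -- shorter unroll means fewer cloned timesteps in memory
    opt.rnn_size   = 256     -- smaller hidden state
    opt.num_layers = 2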

why CudaTensor?

Hi, I am very excited to try word-rnn, and I have installed Torch and everything else on OS X 10.11.

Running the test program th train.lua -gpuid -1, I get an error connected to CUDA, which is

./util/SharedDropout.lua:3: attempt to call field 'CudaTensor' (a nil value)

Is this how it is supposed to be? Any ideas about this error?

Thanks,
Roberto

PS
Please note I have an (old) GPU and some CUDA (6.0) installed on my system, but I wanted Torch to work on CPU only, so I "cheated" during the Torch installation and kept it from finding nvcc. That's why I am a bit surprised the code uses "CudaTensor" ...

Ubuntu 14.04+ required for word-rnn?

I installed every component on OS X, then noticed the note (recently added) about fbcunn being required. fbcunn claims to require Ubuntu 14.04+. Does this imply word-rnn requires Ubuntu 14.04+?

I'll try repeating the process on a Debian jessie machine and see if that's close enough. If not, I can get Ubuntu, although it will be a VM, so the NVIDIA graphics card requirement worries me a little.

If I'm not using the GPU, is fbcunn really needed? Do you have any other suggestions for how to make this work? Thank you!

Installing lrexlib-pcre doesn't work (Linux Mint 17.3)

I'm running Linux Mint 17.3 (a distro compatible with Ubuntu) and followed your instructions to the letter. This line doesn't work:

$ sudo luarocks install lrexlib-pcre PCRE_DIR=/lib/x86_64-linux-gnu/

Instead, you need to do the following:

$ cd ~/torch/install/bin
$ sudo apt-get install libpcre3-dev
$ sudo ./luarocks install lrexlib-pcre PCRE_DIR=/usr/ PCRE_LIBDIR=/usr/lib/x86_64-linux-gnu

[Question] Input word vector

Hello, I have a question about the input word vector.
What is the reason for representing the input word vector as a pretrained dense vector instead of a one-hot vector? Will it make training slower? For example, when we calculate the error, might the error for a one-hot vector be smaller than for a dense vector?

Thanks.

Performance compared to Char-RNN

Have you seen dramatic performance improvements for text generation using the word-based approach compared to the character-based approach? I have not, and I am wondering what to expect.

Error when sampling

When running sample.lua against .t7 files, I frequently (but not always, depending on the temperature setting and seed text) come up against this error:

/home/me/torch/install/bin/luajit: bad argument #2 to '?' (out of bounds at /home/me/torch/pkg/torch/lib/TH/generic/THStorage.c:178)
stack traceback:
    [C]: at 0x7f38999b08e0
    [C]: in function 'multinomial'
    sample.lua:170: in main chunk
    [C]: in function 'dofile'
    ...time/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00406670

It usually comes up after some text has been predicted, and it tends to show up sooner (with less text predicted) when the temperature is lower; a higher temperature lets more text through before the error occurs.

It seems a similar issue exists (existed?) in char-rnn: karpathy/char-rnn#28

From that thread: "The error means your data are naned. Two possible causes include the weights becoming naned during training, or the cv snapshot file being corrupted somehow."

Is there any way I can avoid this situation?
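If the cause really is NaNs in the predicted distribution, as the char-rnn thread suggests, one defensive option is to clean the distribution before sampling. This is only a sketch written against char-rnn's sample.lua, where prediction holds the log-probabilities and prev_char receives the sampled index; word-rnn's variable names may differ.

    -- defensive sketch around the torch.multinomial call at sample.lua
    local probs = torch.exp(prediction):squeeze()
    probs[probs:ne(probs)] = 0                  -- NaN ~= NaN, so this zeroes out any NaNs
    if probs:sum() <= 0 then probs:fill(1) end  -- fall back to a uniform distribution
    probs:div(probs:sum())                      -- renormalize before sampling
    prev_char = torch.multinomial(probs:float(), 1):resize(1):float()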

Can sample-beam.lua be modified to process at word level?

I've been experimenting with your word version, and I was wondering if it's possible to modify 'sample-beam.lua' to work properly at the word level.

https://github.com/pender/char-rnn/blob/master/sample-beam.lua

It improves the output results at the character level, but the current sampling output is concatenated at the word level:

Thequickbrownfoxjumpedoverthelazydog.

Instead of

The quick brown fox jumped over the lazy dog.

I'm new to Lua/Torch and would appreciate any suggestions for modifying the code.

Cheers,
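The character-level samplers build the output by concatenating vocabulary entries directly, so the word-level change is essentially to join the sampled tokens with spaces. A self-contained toy sketch of that step (the real index-to-token table in the samplers is assumed here; char-rnn calls it ivocab):

    local ivocab  = { 'The', 'quick', 'brown', 'fox' }   -- toy index -> word table
    local sampled = { 1, 2, 3, 4 }                       -- toy sampled indices
    local tokens  = {}
    for _, idx in ipairs(sampled) do tokens[#tokens + 1] = ivocab[idx] end
    print(table.concat(tokens, ' '))                     -- prints "The quick brown fox"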

training crashes with too big data

Hi,
I get a lot of errors like:
torch/extra/cutorch/lib/THC/THCTensorIndex.cu:321: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [37,0,0], thread: [0,0,0] Assertion srcIndex < srcSelectDimSize failed.

stack traceback:
[C]: in function 'addmm'
/home/prazek/torch/install/share/lua/5.1/nn/Linear.lua:66: in function 'func'
...e/prazek/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
...e/prazek/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
train.lua:299: in function 'opfunc'
/home/prazek/torch/install/share/lua/5.1/optim/rmsprop.lua:35: in function 'optimizer'
train.lua:358: in main chunk
[C]: in function 'dofile'
...azek/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405db0

It fails like this with GloVe. On the other hand, without GloVe it swaps the whole machine in a weird way (probably on the GPU). Any guesses?

rex_pcre

Hi,

Thanks for sharing this project.

I'm running into an error after installing on a clean EC2 GPU image. Do you have any thoughts on how to resolve this error? I'm new to Lua, and although web research indicates a version compatibility problem, I haven't been able to resolve it. Thanks for any pointers.

/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:383: /home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:383: module 'rex_pcre' not found:No LuaRocks module found for rex_pcre
    no field package.preload['rex_pcre']
    no file '/home/ubuntu/.luarocks/share/lua/5.1/rex_pcre.lua'
    no file '/home/ubuntu/.luarocks/share/lua/5.1/rex_pcre/init.lua'
    no file '/home/ubuntu/torch/install/share/lua/5.1/rex_pcre.lua'
    no file '/home/ubuntu/torch/install/share/lua/5.1/rex_pcre/init.lua'
    no file './rex_pcre.lua'
    no file '/home/ubuntu/torch/install/share/luajit-2.1.0-beta1/rex_pcre.lua'
    no file '/usr/local/share/lua/5.1/rex_pcre.lua'
    no file '/usr/local/share/lua/5.1/rex_pcre/init.lua'
    no file '/home/ubuntu/.luarocks/lib/lua/5.1/rex_pcre.so'
    no file '/home/ubuntu/torch/install/lib/lua/5.1/rex_pcre.so'
    no file '/home/ubuntu/torch/install/lib/rex_pcre.so'
    no file './rex_pcre.so'
    no file '/usr/local/lib/lua/5.1/rex_pcre.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
    [C]: in function 'error'
    /home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:383: in function 'require'
    sample.lua:20: in main chunk
    [C]: in function 'dofile'
    ...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00406670

rex_pcre error

Thank you for your wonderful work!

While testing your script, I ran into the error below:
"/torch/install/share/lua/5.1/trepl/init.lua:383: module 'rex_pcre' not found:No LuaRocks module found for rex_pcre"
(Previously I encountered the same error for 'nn', as in the thread below, but it disappeared after reinstalling nn.)

So I searched the web and found that I need 'lrexlib-pcre', but when I tried to install it (luarocks install lrexlib-pcre), I got the following error:
"Error: Could not find expected file libpcre.a, or libpcre.so, or libpcre.so.* for PCRE -- you may have to install PCRE
in your system and/or pass PCRE_DIR or PCRE_LIBDIR to the luarocks command. Example: luarocks install lrexlib-pcre PCRE_DIR=/usr/local"
So I tried with the additional parameter PCRE_DIR, but it didn't work either.

Could anybody let me know how I can avoid this error?
Thank you in advance.

I'm always getting the same error

After the data is prepared I get this, with either the tinyshakespeare data or my own data:

// /luajit: bad argument #2 to '?' (start index out of bound at /... /generic/Tensor.c:984

Can't sample -- invalid arguments: CudaTensor number CudaTensor number DoubleTensor CudaTensor

Does this help? karpathy/char-rnn#21

ubuntu@ubuntu:~/word-rnn$ th sample.lua cv/lm_lstm_epoch50.00_2.2444.t7 -gpuid -1
creating an LSTM...

/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/nn/Linear.lua:55: invalid arguments: CudaTensor number CudaTensor number DoubleTensor CudaTensor
expected arguments: CudaTensor~2D [CudaTensor2D] [float] CudaTensor2D CudaTensor2D | _CudaTensor2D_ float [CudaTensor2D] float CudaTensor2D CudaTensor2D
stack traceback:
[C]: in function 'addmm'
/home/ubuntu/torch/install/share/lua/5.1/nn/Linear.lua:55: in function 'func'
...e/ubuntu/torch/install/share/lua/5.1/nngraph/gmodule.lua:275: in function 'neteval'
...e/ubuntu/torch/install/share/lua/5.1/nngraph/gmodule.lua:310: in function 'forward'
sample.lua:153: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406240
ubuntu@ubuntu:~/word-rnn$
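The mismatch is a GPU-trained checkpoint being fed CPU (DoubleTensor) inputs when sampling with -gpuid -1. A sketch of converting the checkpoint to CPU tensors first, assuming it follows the char-rnn layout where checkpoint.protos holds the rnn and criterion modules (cutorch/cunn are still needed once, just to deserialize the CUDA tensors):

    require 'torch'; require 'nn'; require 'nngraph'
    require 'cutorch'; require 'cunn'           -- needed to load the CUDA checkpoint
    local checkpoint = torch.load('cv/lm_lstm_epoch50.00_2.2444.t7')
    for _, proto in pairs(checkpoint.protos) do
      proto:double()                            -- convert each module's weights to DoubleTensor
    end
    torch.save('cv/lm_lstm_epoch50.00_2.2444_cpu.t7', checkpoint)
    -- afterwards: th sample.lua cv/lm_lstm_epoch50.00_2.2444_cpu.t7 -gpuid -1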
