Comments (9)
Looks like no training has happened at all.
Can you look into the more detailed log exp/train_phn_l2_c200/log/tr.iter1.log?
from eesen.
I couldn't find any clue.
I have attached the log file.
from eesen.
I modified the transcripts and now it runs, but the accuracy stays constant.
No updates in the network..
LOG (copy-feats:main():copy-feats.cc:100) Copied 184 feature matrices.
Initializing model as exp/train_phn_l5_c200/nnet/nnet.iter0
TRAINING STARTS [2016-Feb-12 14:08:58]
[NOTE] TOKEN_ACCURACY refers to token accuracy, i.e., (1.0 - token_error_rate).
EPOCH 1 RUNNING ... ENDS [2016-Feb-12 14:17:41]: lrate 4e-05, TRAIN ACCURACY 3.7802%, VALID ACCURACY 4.5504%
EPOCH 2 RUNNING ... ENDS [2016-Feb-12 14:26:34]: lrate 4e-05, TRAIN ACCURACY 3.7802%, VALID ACCURACY 4.5504%
EPOCH 3 RUNNING ...
from eesen.
Training simply blows up, as shown by -1e+30 objective.
This is likely to happen if a label (e.g., 100) from label sequences exceeds the range of softmax layer (e.g., 0 - 90). You may check if the number of labels truly corresponds to the size of the softmax layer.
from eesen.
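The out-of-range-label condition described above can be checked with a short script. This is a minimal sketch, not Eesen's actual tooling: the label-file format assumed here (one utterance per line, "utt-id lab1 lab2 ...") and the example softmax size are assumptions.

```python
# Minimal sketch (assumed format): find the largest label id in a set of
# label sequences and compare it against the softmax output range.
def max_label(lines):
    """Return the largest integer label across all label lines."""
    largest = -1
    for line in lines:
        for tok in line.split()[1:]:   # skip the utterance id
            largest = max(largest, int(tok))
    return largest

# Example: the softmax covers labels 0-90, but one sequence contains 100.
labels = ["utt001 3 17 42", "utt002 5 100 8"]
softmax_size = 91
print(max_label(labels) >= softmax_size)   # → True: such a label blows up training
```

If the printed check is True for your real label files, the network's output layer is smaller than the label inventory, which matches the -1e+30 objective symptom.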
I checked the transcripts and everything is OK. I tried with the TEDLIUM corpus (with 200 waves) and I get the same error.
I have attached the log from the TEDLIUM corpus. Kindly help me resolve this.
from eesen.
Are you using the CPU rather than a GPU for training? Eesen does not support CPU training.
from eesen.
I configured for GPU during installation and I use an NVIDIA NVS 300 card.
How can I check whether it runs on the CPU or the GPU?
from eesen.
If the GPU were running, you would see something like the following at the beginning of the log file.
LOG (train-ctc-parallel:SelectGpuIdAuto():cuda-device.cc:262) Selecting from 1 GPUs
LOG (train-ctc-parallel:SelectGpuIdAuto():cuda-device.cc:277) cudaSetDevice(0): Tesla K80 free:12163M, used:124M, total:12287M, free/total:0.989858
LOG (train-ctc-parallel:SelectGpuIdAuto():cuda-device.cc:310) Selected device: 0 (automatically)
LOG (train-ctc-parallel:FinalizeActiveGpu():cuda-device.cc:194) The active GPU is [0]: Tesla K80 free:12149M, used:138M, total:12287M, free/total:0.988719 version 3.7
LOG (train-ctc-parallel:PrintMemoryUsage():cuda-device.cc:334) Memory used: 0 bytes.
from eesen.
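Checking for those device-selection lines can be automated. A minimal sketch, assuming you read the training log into a list of lines; the sample strings below are taken from the log excerpt above, and in practice you would read a file such as exp/train_phn_l2_c200/log/tr.iter1.log instead.

```python
# Minimal sketch: decide from training-log lines whether cuda-device.cc
# reported selecting a GPU; its absence suggests a CPU fallback.
def ran_on_gpu(log_lines):
    """True if the log shows cuda-device.cc selecting a GPU device."""
    return any("cuda-device" in line and "Selected device" in line
               for line in log_lines)

gpu_log = ["LOG (train-ctc-parallel:SelectGpuIdAuto():cuda-device.cc:310) "
           "Selected device: 0 (automatically)"]
cpu_log = ["TRAINING STARTS [2016-Feb-12 14:08:58]"]
print(ran_on_gpu(gpu_log), ran_on_gpu(cpu_log))   # → True False
```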
Yes, the problem was with the GPU. Now the GPU is configured correctly and training is working.
Thanks a lot!
from eesen.