kmario23 / kenlm-training

Training an n-gram based Language Model using KenLM toolkit for Deep Speech 2

natural-language-processing language-modeling automatic-speech-recognition deep-neural-networks kenlm-toolkit kenlm language-model probabilistic-models deep-speech speech-recognition

kenlm-training's Introduction

KenLM

KenLM estimates n-gram probabilities with interpolated modified Kneser-Ney smoothing. Roughly speaking, this subtracts a discount from every observed n-gram count and redistributes the freed probability mass to lower-order distributions, which are themselves estimated from the number of distinct contexts a word appears in rather than from raw counts.


Step-by-step guide for training an n-gram based Language Model using KenLM toolkit

1) Installing KenLM dependencies

Before installing KenLM toolkit, you should install all the dependencies which can be found in kenlm-dependencies.

On Debian/Ubuntu distributions:

To get a working compiler, install the build-essential package. Boost is known as libboost-all-dev. The three supported compression options each have a separate dev package.

$ sudo apt-get install build-essential libboost-all-dev cmake zlib1g-dev libbz2-dev liblzma-dev

2) Installing KenLM toolkit

For this, it's suggested to use a conda or virtualenv virtual environment. For conda, you can create one using:

$ conda create -n kenlm_deepspeech python=3.6 nltk

Then activate the environment using:

$ source activate kenlm_deepspeech

Now we're ready to install kenlm. Let's first clone the kenlm repo:

$ git clone --recursive https://github.com/vchahun/kenlm.git

And then compile the LM estimation code using:

$ cd kenlm
$ ./bjam 
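
(Note: this fork builds with bjam; the upstream kpu/kenlm repository has since moved to a cmake-based build, so if you clone upstream instead, follow its build instructions.)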

As a final step, optionally, install the Python module using:

$ python setup.py install
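
As a quick check that the module is importable:

$ python -c "import kenlm"

If this exits without an ImportError, the installation succeeded.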

3) Training a Language Model

First let's get some training data. Here, I'll use the Bible:

$ wget -c https://github.com/vchahun/notes/raw/data/bible/bible.en.txt.bz2

Next we need a simple preprocessing script, because:

  • the training text should be a single text or compressed file (e.g. .bz2) with one sentence per line;
  • it needs to be tokenized and lowercased before being fed to kenlm.

So, create a simple script preprocess.py with the following lines:

import sys
import nltk

# sent_tokenize/word_tokenize need NLTK's 'punkt' data; fetch it once if missing.
nltk.download('punkt', quiet=True)

for line in sys.stdin:
    for sentence in nltk.sent_tokenize(line):
        # One tokenized, lowercased sentence per output line.
        print(' '.join(nltk.word_tokenize(sentence)).lower())

As a sanity check, run:

$ bzcat bible.en.txt.bz2 | python preprocess.py | wc

and confirm that it completes without errors and reports plausible line, word, and byte counts.
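
For example, assuming NLTK's default English tokenizers, an input line like

In the beginning God created the heaven and the earth.

should come out as

in the beginning god created the heaven and the earth .

(note the period split off as its own token).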

Now we can train the model. For training a trigram model with Kneser-Ney smoothing, use:

# -o means `order` which translates to the `n` in n-gram
$ bzcat bible.en.txt.bz2 |\
  python preprocess.py |\
  ./kenlm/bin/lmplz -o 3 > bible.arpa

The above command first pipes the data through the preprocessing script, which tokenizes and lowercases it. The tokenized, lowercased text is then piped to the lmplz program, which does the actual estimation.
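
For much larger corpora, lmplz also accepts a -S flag that caps the memory used for sorting, either as a percentage of RAM or as an absolute size; a variant of the command above could look like:

$ bzcat bible.en.txt.bz2 |\
  python preprocess.py |\
  ./kenlm/bin/lmplz -o 3 -S 50% > bible.arpa

For the Bible-sized corpus used here, the defaults are fine.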

It should finish in a couple of seconds and produce an ARPA file, bible.arpa. You can inspect it with less or more (e.g. $ less bible.arpa). At the top there is a \data\ section listing the unigram, bigram, and trigram counts, followed by sections containing the estimated probabilities and backoff weights.
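
For orientation, the top of the file should look roughly like this (the actual counts and values depend on the corpus):

\data\
ngram 1=<number of unigrams>
ngram 2=<number of bigrams>
ngram 3=<number of trigrams>

\1-grams:
<log10 probability>	<token>	<log10 backoff>
...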

Binarizing the model

ARPA files can be read directly, but the binary format loads much faster and exposes more configuration options. So we binarize the model using:

$ ./kenlm/bin/build_binary bible.arpa bible.binary

Note that, unlike IRSTLM, the file extension does not matter; the binary format is recognized using magic bytes.

You can also choose the trie data structure when binarizing. For this, use:

$ ./kenlm/bin/build_binary trie bible.arpa bible.binary
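
By default, build_binary produces a probing hash table, which is the fastest structure but uses the most memory; trie trades some lookup speed for a considerably smaller file. For a corpus this small either choice works, so pick trie only if memory is a concern.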

Using the model (i.e. scoring sentences)

Now that we have a Language Model, we can score sentences. It's super easy to do this using the Python interface. Below is an example:

import kenlm

model = kenlm.LanguageModel('bible.binary')
# score() returns the log10 probability of the sentence
# (begin/end-of-sentence markers are included by default).
model.score('in the beginning was the word')

Then, you might get a score such as:

-15.03003978729248
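
The bindings expose more than score. Below is a minimal sketch (method names as in the kenlm Python module; check your installed version if they differ) showing per-token scores and sentence perplexity:

import kenlm

model = kenlm.LanguageModel('bible.binary')
sentence = 'in the beginning was the word'

# full_scores() yields one (log10 probability, n-gram order used, is-OOV) tuple
# per token, including the end-of-sentence token.
for logprob, ngram_length, oov in model.full_scores(sentence):
    print(logprob, ngram_length, oov)

# Perplexity of the whole sentence.
print(model.perplexity(sentence))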

References:

  1. http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel
  2. http://victor.chahuneau.fr/notes/2012/07/03/kenlm.html


kenlm-training's Issues

Capital letters

I am trying to build an Arabic language model. The problem is that Arabic characters are not understood by many frameworks, so I use a simple conversion that maps each Arabic character to English characters. But since there are more Arabic characters than English ones, the corpus ends up containing capital letters whose meaning would change if lowercased. Will it be an issue to use capital letters?

How to get all the functions/arguments of the kenlm model

Hi, many thanks for the post on how to install kenlm. I have built bible.arpa and can get sentence scores using model.score(). Now, given a character or sentence prefix, I would like to get the n most suitable next words; how can I do that?
Also, what functions and arguments are available on this model, and where can I find them documented?

ERROR

After I run the command bzcat bible.en.txt.bz2 | python preprocess.py | ./kenlm/bin/lmplz -o 3 > bible.arpa, I get:

bash: ./kenlm/bin/lmplz: No such file or directory
Traceback (most recent call last):
  File "preprocess.py", line 6, in <module>
    print(' '.join(nltk.word_tokenize(sentence)).lower())
BrokenPipeError: [Errno 32] Broken pipe

Segmentation fault (core dumped)

Hi, I got the error in the title when running this command:
./kenlm/bin/build_binary trie bible.arpa bible.binary

Everything was fine until this step; there was no sign of running out of memory or anything like that, it just crashed immediately. Any suggestions?

P.S. If I can't solve this problem, can I just use the ARPA file (without the trie)? It gives the same score for the demo sentence in the README.

How to generate the trie file?

Hi,
I have successfully run all the steps in the README and have bible.arpa and bible.binary, but there is no trie file. How can I generate the trie? I can't find any tutorial about this.

BrokenPipeError: [Errno 32] Broken pipe

Hello,
I am following this tutorial to create my French language model: https://github.com/kmario23/KenLM-training
But when I run this command:

bzcat ./data_final/vocabulary.txt.bz2 | python preprocess.py | /home/innovation/kenlm/bin/lmplz -o 3 > myvocabulary.arpa

I get the following error:

print(' '.join(nltk.word_tokenize(sentence)).lower())
BrokenPipeError: [Errno 32] Broken pipe
Segmentation fault (core dumped)
