
postagga


postagga logo

"But if thought corrupts language, language can also corrupt thought."

  • George Orwell, 1984

postagga is a suite of tools to help you build efficient, self-contained natural language processors. You can use postagga to turn annotated text samples into full-fledged parsers capable of understanding "free speech" input as structured data. Ah, and you'll be able to do this easily. You're welcome.

Getting postagga

You can use postagga as a library in your Clojure project. Grab it from Clojars: in the dependencies of your project.clj, just add:

Clojars Project

You can also clone the project and browse the source and models:

git clone https://github.com/turbopape/postagga.git

The models are included under the models folder.

In JVM Clojure, provided you have cloned the repository:

;; ...
 (def fr-model (load-edn "models/fr_tb_v_model.edn")) ;; for French for instance
;; ... 
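
If load-edn is not already defined in your project, a minimal sketch of such a helper (assuming JVM Clojure and clojure.edn) could be:

(require '[clojure.edn :as edn])

(defn load-edn
  "Read an EDN file from disk and return the parsed data structure."
  [path]
  (edn/read-string (slurp path)))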

We also ship two light models as vars defined in namespaces, one for French and one for English, since artifact size is a concern for JavaScript targets. You can use these models by requiring the two namespaces:

  (ns your-cool.bot
    (:require [postagga.en-fn-v-model :refer [en-model]]   ;; for English
              [postagga.fr-tb-v-model :refer [fr-model]])) ;; for French
  ;; ...

These namespaces make it easy for you to ship parsers for ClojureScript.

You can see an example of how to work with these models, while making sure your code is compatible across Clojure AND ClojureScript (thanks to reader conditionals), in the Test File.
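
For illustration, here is a hedged sketch of what this can look like in a .cljc file, using a reader conditional to read a bigger model from disk on the JVM while falling back to the embedded var in ClojureScript (the namespace your-cool.bot and the choice of models are assumptions):

(ns your-cool.bot
  (:require [postagga.fr-tb-v-model :refer [fr-model]]
            #?(:clj [clojure.edn :as edn])))

;; On the JVM we can afford to read a bigger model from disk; in the browser
;; we fall back to the lighter model shipped as a namespace var.
#?(:clj  (def model (edn/read-string (slurp "models/fr_tb_v_model.edn")))
   :cljs (def model fr-model))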

How does it work?

To do its magic, postagga extracts the phrase structure of your input, compares this structure to its many semantic rules, and, if it finds a match, determines where in this structure it should extract meaningful information.

Let's study a simple example. Look at the next sentence:

"Rafik loves apples"

That is our "Natural language input"

The first step in understanding this sentence is to extract some structure from it, so it is easier to interpret. One common way to do this is to extract its grammatical phrase structure, which is close enough to what "function" words are actually meant to provide:

Noun Verb Noun

That was the phrase structure analysis, or, as we call it, POS (Part Of Speech) tagging. These "tags" qualify parts of the sentence, as the name implies, and will be used as a high-fidelity mechanism to write rules for parsers of such phrases.

postagga has tools that enable you to train POS taggers for any language you want, without relying on external libs. Actually, it does not care about the meaning of the tags at all. However, you should be consistent and clear enough when annotating your input data samples with tags: on the one hand, your parser will be more reliable, and on the other hand, of course, you'll do yourself a great favour when maintaining your parser.

Now comes the parser part. Actually, postagga offers a parser that needs semantic rules to be able to map a particular phrase structure into data. In our example, we know that the first Noun depicts a subject carrying out some action. This action is represented by the Verb following it. Finally, the Noun coming after the Verb will undergo this action.

postagga parsers just let you express such rules so they can extract the data for you. You literally tell them to take the first Noun and call it Subject, take the Verb and label it Action, and take the last Noun as the Object, then package all of it into the following data structure:

{:Subject "Rafik" :Action "Loves" :Object "Apples"}

Naturally, postagga can handle much more complex sentences!
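
As a teaser, and using postagga's rule syntax (detailed further down in this README), a rule extracting this structure could look roughly like the sketch below; the "NOUN" and "VERB" tags and the step names are hypothetical and depend on how your corpus is annotated:

{:id :subject-action-object
 :optional-steps []
 :rule [:subject #{:get-value #{"NOUN"}}  ;; grab the word tagged NOUN -> :subject
        :action  #{:get-value #{"VERB"}}  ;; grab the word tagged VERB -> :action
        :object  #{:get-value #{"NOUN"}}]}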

postagga parsers are eventually compiled into self-contained packages, with not a single third-party dependency, and can easily run on servers (Clojure version) and in the browser (ClojureScript), so now your bots can really get what you're trying to tell them!

The postagga Workflow

Training a POS Tagger

First of all, you need to train a POS tagger that can qualify parts of your natural text. postagga relies on Hidden Markov Models, decoded with the Viterbi algorithm. The model consists of a set of matrices describing, for instance, which states (the POS tags) exist, how likely we are to transition from one POS tag to another, etc.

All of these constitute a model, and they are computed out of what we call an annotated text corpus. The postagga.trainer namespace is used to create models out of such an annotated corpus. To train a model, make sure you have an annotated corpus like so:

[;; A vector of sentences like these:
 [["-" "PONCT"] ["guerre" "NC"] ["d'" "P"] ["indochine" "NPP"]]
 [["-" "PONCT"] ["colloque" "NC"] ["sur" "P"] ["les" "DET"] ["fraudes" "NC"]]
 [["-" "PONCT"] ["dernier" "ADJ"] ["résumé" "NC"] [":" "PONCT"] ["l'" "DET"] ["\"" "PONCT"] ["affaire" "NC"] ["des" "P+D"] ["piastres" "NC"] ["\"" "PONCT"]]
 [["catégories" "NC"] [":" "PONCT"] ["guerre" "NC"] ["d'" "P"] ["indochine" "NPP"] ["." "PONCT"]]
 [["indochine" "NPP"] ["française" "ADJ"] ["." "PONCT"]]
 [["quatrième" "ADJ"] ["république" "NC"] ["." "PONCT"]]
 ;; etc...
]

Say you have this corpus, that is, a vector of annotated sentences in a var unsurprisingly named corpus. To train a model, just issue:

(require '[postagga.trainer :refer [train]])

(def model (train corpus)) ;;<- Beware, these can be large vars, so avoid realizing all of them, e.g., by printing them in your REPL!
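
If you want to persist the trained model, for instance as an EDN file like the ones shipped under the models folder, a minimal sketch (assuming JVM Clojure; the file name is hypothetical) is:

;; pr-str renders the model map as EDN text; it can be reloaded later with load-edn.
(spit "models/my-model.edn" (pr-str model))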

We processed one annotated corpus for English:

We also processed two annotated corpora for French:

We exposed two of these models as Clojure namespaces so you can embed them without using the resource functionality, which is specific to Clojure (JVM). We chose the two lightest ones, so they should not cause network issues:

The suite of tools used to process these two corpora is in the corpuscule project. Please refer to the licensing of these corpora to see to what extent you can use works derived from them.

We then trained a model out of the above English corpus:

... and two models out of these two French corpora:

Now you can use that model to assign POS tags to speech (sentences must be fed in the form of a vector of all lower-case tokens):

(require '[postagga.tagger :refer [viterbi]])

(viterbi model ["je" "suis" "heureux"])
;;=> ["CLS" "V" "ADJ"]

Patching Viterbi's Output

When the tagger encounters a word it doesn't know about - that is, a word that was not in the corpus used to generate the Viterbi model - it arbitrarily assigns a tag, more or less randomly picked by the algorithm. To enhance detection a bit, it is possible to patch the output, that is, to look up a dictionary of attributes and force the tags accordingly. For instance, given a dictionary of proper nouns in a given language, you can patch your HMM-generated POS tags by forcing every word that happens to be an entry in this dictionary to have the "NPP" tag.

We provide two dictionaries for proper nouns:

You can see how you can integrate patching in the parsing phase hereafter.

Technically, dictionaries are tries, to speed up lookups over many entries. But this may evolve over time and should be considered a mere implementation detail.

Meaning of tags

A reference to the meaning of tags is provided:

Using the tagger to parse free speech

Now that you have your tagger trained, you can use a parser to drill the information out of your sentences. Continuing our last example, say you want postagga to understand how you currently feel, or how you look... This can be done by detecting the first token as a subject (CLS), followed by a verb (V) and then an adjective (ADJ). We want to detect who is described by what adjective in our sentence. For this, we'll use the postagga.parser namespace.

First of all, require the namespace:

(require '[postagga.parser :refer [parse-tags-rules]])

Then, you'll need to specify rules for the parser. We want to grab the word tagged as CLS and the word tagged as ADJ as our information. Here's what the parser rules look like:

(def sample-rules [{;;Rule TB French "je suis heureux."
                    :id :sample-rule-tb-french
                    :optional-steps []
                    :rule [:qui       ;;<----- A step
                           #{:get-value #{"CLS"}} ;;<----- A state in the parse machine,
                                           ;;i.e., a set of possible sets of POS tags
                           :mood
                           #{#{"V"}}    ;;<----- A state that is matched but whose word is not captured
                           #{:get-value #{"ADJ"}}]}]

This deserves some explanation before we carry on with our example.

The parser is basically a state machine. It goes through steps ([:qui, :mood]), each step encompassing one or more states ([#{#{"V"}} ...]). A state basically refers to words; it is matched against tag sets (a word can very well relate to multiple tags, if your preferred tagger wants it to!). Different tag sets can be assigned to a state. For instance, to say that in some state we require either a noun ("NPP") or a verb ("V"), you might put:

;...
#{#{"V"} #{"NPP"}}
;...

Putting the keyword :get-value in a state tells the parser to grab the word having led to this state and to put it in the yielded parse map, assigning it to a key representing the step that state was in. Confusing, isn't it? 😕

You'll get it with an example.

Let's say that somewhere we have:

[:qui ; <-- A step
;;...
   {:get-value #{"CLS"}} ;;<-- A state with :get-value under the :qui step
;;...
]

The value of the word that yielded the tag CLS - which is je in our example - will be reflected in the output map as an entry in a vector associated with the related step, which is :qui:

{:qui ["je"]}

This is what the postagga parser is all about: you tell it where to extract information, and how you want it structured for upstream processing.

If we had multiple states with the :get-value flag on, we'd find multiple words in the corresponding entry of the output; that's why each step key refers to a vector of words in the output map.

It is also possible to say that a state can be encountered repeatedly, using the :multi keyword. If, under a certain step, you write:

:some-step
;...
#{:get-value :multi #{"ADJ"}}
;...

And if you feed postagga the following tokenized sentence:

["il" "parait" "beau" "grand" "heureux"]

You'll find in the parse map:

{:some-step ["beau" "grand" "heureux"]}

The :optional-steps stanza tells the parser not to raise an error if a step belonging to this vector is absent.
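
For instance, here is a hedged sketch of a rule where a leading adverb may or may not be present; the step name :maybe-adverb and the "ADV" tag are hypothetical and depend on your tag set:

{:id :mood-with-optional-adverb
 :optional-steps [:maybe-adverb]        ;; this step may be missing from the input
 :rule [:maybe-adverb #{#{"ADV"}}
        :qui #{:get-value #{"CLS"}}
        :mood #{#{"V"}}
        #{:get-value #{"ADJ"}}]}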

You'll also need to tell the parser how to break a line of text down into a vector of words. We call this a tokenizer. Until we develop a full-fledged one with language-specific rules, we can start with a naive one that splits strings on whitespace:

;; Hey, this one works only in the Clojure (JVM) version!
(def sample-tokenizer-fn #(clojure.string/split % #"\s"))
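
If you also target ClojureScript, a hedged portable alternative is to require clojure.string explicitly (in a ClojureScript file you would put it in your ns :require form) and define a similar tokenizer on top of it:

(require '[clojure.string :as str])

;; Naive tokenizer splitting on runs of whitespace; portable across Clojure and ClojureScript.
(def sample-tokenizer-fn #(str/split % #"\s+"))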

Back to our sample. With sample-rules holding a set of rules as defined above, you can parse your sentence like so:

(def parse-result (parse-tags-rules 
                   sample-tokenizer-fn      ;; The tokenizer function.
                   (partial viterbi model)  ;; The tagger function - curried with a model
                   sample-rules             ;; The parser rules.
                   "je suis heureux"))      ;; The sentence to parse. 

And you'd have a detailed result like so:

{:errors nil ;;<- The error if any
 :result {:rule :sample-rule-tb-french ;; <- Which rule was detected 
          :data {:qui ["je"],          ;; <- The data structure drilled
                                       ;;    down from the input.
                 :mood ["heureux"]}}}

The errors are reported as a collection mapping each rule to the step and state at which the parser failed. This can be quite large, so be careful not to spit the contents of the result directly into your REPL; you can test whether :errors is nil and work with the :data value:

;; Do something with the extracted data, e.g.:
(when (nil? (:errors parse-result))
  (get-in parse-result [:result :data]))
 

To integrate patching (as discussed above) into the parsing, you can proceed as follows:

(def patch-fr-tagger-w-name ;;<- a function that wraps viterbi into a
                            ;;   "patched" version
  #(patch-w-entity  0.9 % en-names-trie
                    ;; Takes a sentence, computes tags with viterbi
                    ;; and then checks whether the words are close
                    ;; enough to entries in the names dictionary, in
                    ;; which case it forces them to have the "NC" tag
                    (viterbi fr-model %)
                    "NC"))
;;=> #'postagga.core-test/patch-fr-tagger-w-name                    

(-> (parse-tags-rules sample-tokenizer-fn 
                      patch-fr-tagger-w-name 
                      sample-rules "nicolas est heureux")
              (get-in [:result :data]))
;;=> {:qui["nicolas"] :mood ["heureux"]}              

Complete list of features

You can see some of this workflow (other than the training) in the Tests.

Please refer to the Changelog to see included features per version.

TODO and contributing

postagga can make great use of contributors like you! I'll track enhancements, bugs, features, etc. in the project's issues tab, so please feel free to send your PRs!

Code Of Conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

License and Credits

Copyright (c) 2017 Rafik Naccache.

Happily brought to you by fekr.

The logo was created by my talented friend, the great Chakib Daoud.

Distributed under the terms of the MIT License.
