onurgu / ner-tagger-dynet

See http://github.com/onurgu/joint-ner-and-md-tagger. This repository is essentially a Bi-LSTM based sequence tagger, implemented in both TensorFlow and DyNet, which can combine several sources of information about each word unit, such as word embeddings, character-based embeddings, and morphological tags from an FST, to obtain the representation for that word unit.

License: MIT License

Python 30.39% Perl 2.97% sed 59.07% Shell 5.46% Awk 0.19% Jupyter Notebook 1.91%
named-entity-recognition tensorflow reimplementation sequence-tagger neural-networks bi-lstm dynet

ner-tagger-dynet's Introduction

See the updated version at http://github.com/onurgu/joint-ner-and-md-tagger

Neural Tagger for MD and NER

This repo contains the software that was used to conduct the experiments reported in our article titled "Improving Named Entity Recognition by Jointly Learning to Disambiguate Morphological Tags" [1] to be presented at COLING 2018.

Training and testing

We recommend using the helper scripts for conducting experiments. The scripts named helper-script-* run the experiments from the paper with the given hyperparameters.

bash ./scripts/helper-script-to-run-the-experiment-set-small-sizes.sh campaign_name | parallel -j6

For the reporting part to work, you should set up a working Sacred environment, which is straightforward if you choose filesystem-based storage. You can find an example of this in the helper scripts in the ./scripts/TRUBA folder.
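Sacred's filesystem storage essentially writes each run's configuration and results as JSON files into a per-run directory. If that is all you need, here is a minimal stand-alone sketch of the idea (function and file names are illustrative, not Sacred's actual API):

```python
import json
import os
import tempfile

def store_run(base_dir, run_id, config, result):
    """Persist one experiment run as JSON files, in the spirit of
    Sacred's filesystem storage: one directory per run."""
    run_dir = os.path.join(base_dir, str(run_id))
    os.makedirs(run_dir, exist_ok=True)
    with open(os.path.join(run_dir, "config.json"), "w") as f:
        json.dump(config, f, indent=2)
    with open(os.path.join(run_dir, "result.json"), "w") as f:
        json.dump(result, f, indent=2)
    return run_dir

base = tempfile.mkdtemp()
run_dir = store_run(base, 1, {"word_dim": 300, "crf": 1}, {"dev_f1": 67.46})
print(sorted(os.listdir(run_dir)))  # ['config.json', 'result.json']
```

With real Sacred you would instead attach a filesystem observer to the experiment, but the on-disk outcome is similar: one directory per run holding the config and results.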

Tag sentences

This project does not have a designated tagger script for now, but you can obtain the output in eval_dir. You should provide the text in tokenized form in CoNLL format. The script will tag both the development and testing files and produce files in ./evaluation/temp/eval_logs/. If you need a dedicated tagger script and want to contribute by writing and sharing one with the project, you are welcome.
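The expected input is standard CoNLL layout: one token per line, columns separated by whitespace, and sentences separated by blank lines. A small sketch that renders tokenized sentences in that shape (the exact column set is an assumption; compare with the dataset files shipped with the repository):

```python
def to_conll(sentences, placeholder_label="O"):
    """Render tokenized sentences in CoNLL shape: one token per line,
    a blank line between sentences. The label column is a placeholder
    when tagging unlabeled text."""
    blocks = []
    for sent in sentences:
        blocks.append("\n".join(f"{tok} {placeholder_label}" for tok in sent))
    return "\n\n".join(blocks) + "\n"

print(to_conll([["Ali", "Ankara'ya", "gitti", "."]]))
```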

Replication of the experiments

To reproduce the experiments reported with our model, you can use Docker to build a replica of our experimentation environment.

To build:

docker build -t yourimagename:yourversion .

To run:

docker run -ti -v `pwd`/dataset:/opt/ner-tagger-dynet/dataset -v `pwd`/models:/opt/ner-tagger-dynet/models yourimagename:yourversion python train.py --train dataset/gungor.ner.train.small --dev dataset/gungor.ner.dev.small --test dataset/gungor.ner.test.small --word_dim 300 --word_lstm_dim 200 --word_bidirect 1 --cap_dim 100 --crf 1 --lr_method=adam --maximum-epochs 50 --char_dim 200 --char_lstm_dim 200 --char_bidirect 1 --overwrite-mappings 1 --batch-size 1

You should create `pwd`/dataset and `pwd`/models, or set their permissions accordingly.

References

[1] Gungor, O., Uskudarli, S., Gungor, T., "Improving Named Entity Recognition by Jointly Learning to Disambiguate Morphological Tags", COLING 2018, 19-25 August 2018 (to appear).

ner-tagger-dynet's People

Contributors

cgl, onurgu


ner-tagger-dynet's Issues

About the meaning of outputs

First, I would like to thank you for sharing this.
When I ran the training file train_tensorflow.py, I got negative numbers whose meaning I don't understand:
Starting epoch 5...
Reshuffling
n_batches: 29
bucket_id: 7
-8.567176 -7.873434 -7.972278 n_batches: 29
Reshuffling
bucket_id: 5
-7.769209 -7.041025 -6.468853 Reshuffling
n_batches: 29

Could you explain them to me? Where are the accuracy, precision, recall, and F1 scores on the training and dev sets?
Thank you in advance!

P.S.: I also ran the source code provided by Lample et al., and here is the output:
processed 8843 tokens with 388 phrases; found: 332 phrases; correct: 243.
accuracy: 96.57%; precision: 73.19%; recall: 62.63%; FB1: 67.50
ORG: precision: 73.09%; recall: 64.54%; FB1: 68.55 249
PER: precision: 73.49%; recall: 57.55%; FB1: 64.55 83
ID NE Total O I-ORG B-ORG B-PER I-PER Percent
0 O 8015 7942 28 27 9 9 99.089
1 I-ORG 357 66 277 7 0 7 77.591
2 B-ORG 282 63 13 194 12 0 68.794
3 B-PER 106 15 2 21 62 6 58.491
4 I-PER 83 5 13 0 0 65 78.313
8540/8843 (96.57356%)
Score on dev: 67.46000
Score on test: 67.50000
New best score on dev.
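For context on the conlleval-style numbers quoted above: precision, recall, and FB1 are computed at the phrase level over BIO chunks, not per token. A minimal sketch of that computation (illustrative, not the conlleval tool itself):

```python
def chunks(tags):
    """Return the set of (label, start, end_exclusive) chunks in a BIO sequence."""
    out, label, start = set(), None, None
    for i, t in enumerate(list(tags) + ["O"]):
        # a chunk begins at B-, or at I- whose label differs from the open chunk
        begins = t.startswith("B-") or (t.startswith("I-") and t[2:] != label)
        if label is not None and (t == "O" or begins):
            out.add((label, start, i))  # close the open chunk
            label = None
        if begins:
            label, start = t[2:], i
    return out

def prf1(gold_tags, pred_tags):
    """Phrase-level precision, recall, and F1: a predicted chunk counts as
    correct only if its label AND boundaries exactly match a gold chunk."""
    gold, pred = chunks(gold_tags), chunks(pred_tags)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(prf1(["B-PER", "I-PER", "O", "B-ORG"],
           ["B-PER", "I-PER", "O", "O"]))  # precision 1.0, recall 0.5
```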

creation of eval log folder

./evaluation/temp/eval_logs is not created automatically, which stops training during evaluation
with an error message similar to the one below:
IOError: [Errno 2] No such file or directory: './evaluation/temp/eval_logs/dev.eval.1212048.epoch-0000.output'
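Until this is fixed in the training code, a simple workaround (a sketch; open_eval_log is a hypothetical helper, not part of the project) is to create the directory before opening the log file:

```python
import os
import tempfile

def open_eval_log(path):
    """Ensure the parent directory exists before writing an eval log."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return open(path, "w")

# illustrative path shaped like the one in the error message above
base = tempfile.mkdtemp()
log_path = os.path.join(base, "eval_logs", "dev.eval.output")
with open_eval_log(log_path) as f:
    f.write("ok\n")
```

`exist_ok=True` makes the call idempotent, so it is safe to run before every evaluation epoch.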

When I train the first model, it does not create mappings.pkl

In model_tensorflow.save_mappings:

if overwrite_mappings is off (0) and the mapping file does not exist,
it does not create a new one.

So the flow should be:

if not exists(mapping_file):
    create_one()
else:
    if overwrite_option:
        create_one()
    else:
        # handle unexpected behaviour, or dismiss?
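A runnable sketch of that flow, assuming the mappings are written with pickle (save_mappings here is illustrative, not the project's exact signature):

```python
import os
import pickle
import tempfile

def save_mappings(path, mappings, overwrite=False):
    """Create the mappings file if it is missing; replace an existing
    one only when overwriting is explicitly allowed."""
    if os.path.exists(path) and not overwrite:
        raise FileExistsError(f"{path} exists; pass overwrite=True to replace it")
    with open(path, "wb") as f:
        pickle.dump(mappings, f)

path = os.path.join(tempfile.mkdtemp(), "mappings.pkl")
save_mappings(path, {"word_to_id": {"<UNK>": 0}})        # first run: file is created
save_mappings(path, {"word_to_id": {}}, overwrite=True)  # later runs must opt in
```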
