
Integrating Semantic Knowledge to Tackle Zero-shot Text Classification. NAACL-HLT 2019 (accepted as an oral presentation).

Jingqing Zhang, Piyawat Lertvittayakumjorn, Yike Guo

Jingqing and Piyawat contributed equally to this project.

Paper link: arXiv:1903.12626

Contents

  1. Abstract
  2. Code
  3. Acknowledgement
  4. Citation

Abstract

Insufficient or even unavailable training data for emerging classes is a major challenge in many classification tasks, including text classification. Recognising text documents of classes that have never been seen in the learning stage, so-called zero-shot text classification, is therefore difficult, and only a limited number of previous works have tackled this problem. In this paper, we propose a two-phase framework together with data augmentation and feature augmentation to solve this problem. Four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and a general knowledge graph) are incorporated into the proposed framework to deal with instances of unseen classes effectively. Experimental results show that each of the two phases, as well as their combination, clearly outperforms baseline and recent approaches in classifying real-world texts under the zero-shot scenario.

Code

Checklist

To run the code, please check the following items.

  • Package dependencies:
  • Download the original datasets.
  • Check config.py and update the locations of the data files accordingly. config.py also defines the locations of intermediate files.
  • Intermediate files already provided in this repo:
    • classLabelsDBpedia.csv: A summary of the classes in DBpedia and their linked nodes in ConceptNet.
    • classLabels20news.csv: A summary of the classes in 20news and their linked nodes in ConceptNet.
    • Random selections of seen/unseen classes in DBpedia with unseen rates 0.25 and 0.5.
    • Random selections of seen/unseen classes in 20news with unseen rates 0.25 and 0.5.
    • Note: the seen/unseen classes were randomly selected 10 times. You may randomly generate another 10 groups of seen/unseen classes.
  • Some intermediate files have been uploaded as supplementary material.
  • Other intermediate files are generated automatically when they are needed.
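The note above says the seen/unseen splits were drawn at random 10 times. As a minimal sketch of how ten further groups could be generated (this helper is hypothetical and not part of the repo; DBpedia has 14 classes, so an unseen rate of 0.5 gives 7 unseen classes per group):

```python
import random

def random_seen_unseen_groups(class_ids, unseen_rate, n_groups=10, seed=0):
    """Draw n_groups independent random seen/unseen partitions of the classes."""
    rng = random.Random(seed)
    n_unseen = round(len(class_ids) * unseen_rate)
    groups = []
    for _ in range(n_groups):
        unseen = sorted(rng.sample(class_ids, n_unseen))
        seen = [c for c in class_ids if c not in set(unseen)]
        groups.append({"seen": seen, "unseen": unseen})
    return groups

# DBpedia has 14 classes; unseen rate 0.5 selects 7 unseen classes per group.
groups = random_seen_unseen_groups(list(range(1, 15)), unseen_rate=0.5)
```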

Please feel free to raise an issue if you have any difficulty running the code or obtaining the intermediate files.

How to perform data augmentation

An example:

python3 topic_translation.py \
        --data dbpedia \
        --nott 100

The arguments of the command are:

  • data: Dataset, either dbpedia or 20news.
  • nott: Number of original texts to be translated into all classes except their original class. If nott is not given, all texts in the training dataset will be translated.

The location of the result file is specified by config.{zhang15_dbpedia, news20}_train_augmented_aggregated_path.
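For intuition, topic translation rewrites a document of one class in terms of another class, and replacement words can be picked with a word-analogy query over embeddings. A toy sketch (the embedding table, vectors, and helper below are purely illustrative; a real run uses pretrained word vectors):

```python
import numpy as np

# Toy embedding table; a real run would use pretrained word vectors.
emb = {
    "animal":  np.array([1.0, 0.0]),
    "company": np.array([0.0, 1.0]),
    "dog":     np.array([1.0, 0.1]),
    "cat":     np.array([0.9, 0.05]),
    "google":  np.array([0.1, 1.0]),
}

def translate_word(word, src_class, tgt_class, emb):
    """Analogy query word - src_class + tgt_class; nearest neighbour by cosine."""
    query = emb[word] - emb[src_class] + emb[tgt_class]
    best, best_sim = None, -1.0
    for cand, vec in emb.items():
        if cand in {word, src_class, tgt_class}:
            continue
        sim = float(query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best
```

With these toy vectors, translating "dog" from class "animal" towards class "company" lands on "google", illustrating how class-specific words get swapped during topic translation.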

How to perform feature augmentation / create v_{w,c}

An example:

python3 kg_vector_generation.py --data dbpedia 

The argument of the command is:

  • data: Dataset, either dbpedia or 20news.

The locations of the result files are specified by config.{zhang15_dbpedia, news20}_kg_vector_dir.
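As a rough illustration of what a knowledge-graph feature can encode, the sketch below scores each class node by inverse shortest-path distance from a word in a toy graph. The real v_{w,c} vectors are built from ConceptNet relations by kg_vector_generation.py; this helper and graph are hypothetical:

```python
from collections import deque

def kg_feature(graph, word, class_nodes, max_hops=3):
    """One feature per class node: 1/(1+d) for BFS distance d, else 0.0."""
    dist = {word: 0}
    queue = deque([word])
    while queue:
        node = queue.popleft()
        if dist[node] >= max_hops:
            continue
        for nb in graph.get(node, []):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return [1.0 / (1 + dist[c]) if c in dist else 0.0 for c in class_nodes]

# Toy graph given as adjacency lists (edges follow the arrow direction only).
toy_graph = {
    "dog": ["pet", "mammal"],
    "pet": ["animal"],
    "mammal": ["animal"],
    "stock": ["company"],
}
features = kg_feature(toy_graph, "dog", ["animal", "company"])
```

Here "dog" reaches "animal" in two hops (feature 1/3) and never reaches "company" (feature 0), so the feature vector separates related from unrelated classes.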

How to train / test Phase 1

  • Without data augmentation, an example:

python3 train_reject.py \
        --data dbpedia \
        --unseen 0.5 \
        --model vw \
        --nepoch 3 \
        --rgidx 1 \
        --train 1

  • With data augmentation, an example:

python3 train_reject_augmented.py \
        --data dbpedia \
        --unseen 0.5 \
        --model vw \
        --nepoch 3 \
        --rgidx 1 \
        --naug 100 \
        --train 1

The arguments of the command are:

  • data: Dataset, either dbpedia or 20news.
  • unseen: Rate of unseen classes, either 0.25 or 0.5.
  • model: The model to be trained. This argument can only be:
    • vw: the inputs are embeddings of words (from the text).
  • nepoch: The number of epochs for training.
  • train: In Phase 1, this argument does not affect the program; training and testing always run together.
  • rgidx: Optional. Random-group starting index; e.g. if 5, training starts from the 5th random group (default 1). Use this argument when a run has been accidentally interrupted.
  • naug: The number of augmented texts per unseen class.

The location of the result file (a pickle) is specified by config.rejector_file. The pickle file contains a list of 10 sublists (one per random group). Each sublist holds the prediction for each test case (1 = predicted as seen, 0 = predicted as unseen).
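Assuming the pickle layout described above, the result file can be loaded and summarised as follows (the path is whatever config.rejector_file points to; the demo writes a dummy file with that layout first):

```python
import os
import pickle
import tempfile

def summarize_rejector(path):
    """Load the rejector pickle (10 sublists of 0/1 predictions) and
    return (# predicted seen, # test cases) per random group."""
    with open(path, "rb") as f:
        groups = pickle.load(f)
    return [(sum(preds), len(preds)) for preds in groups]

# Demo with a dummy file standing in for config.rejector_file.
dummy = [[1, 0, 1], [0, 0, 1]] + [[1, 1, 1] for _ in range(8)]
path = os.path.join(tempfile.mkdtemp(), "rejector.pickle")
with open(path, "wb") as f:
    pickle.dump(dummy, f)
summary = summarize_rejector(path)
```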

How to train / test the traditional classifier in Phase 2

An example:

python3 train_seen.py \
        --data dbpedia \
        --unseen 0.5 \
        --model vw \
        --sepoch 1 \
        --train 1

The arguments of the command are:

  • data: Dataset, either dbpedia or 20news.
  • unseen: Rate of unseen classes, either 0.25 or 0.5.
  • model: The model to be trained. This argument can only be:
    • vw: the inputs are embeddings of words (from the text).
  • sepoch: Repeat the training of each epoch several times. The positive/negative sample ratio and the learning rate stay constant within an epoch, no matter how many times the epoch is repeated.
  • train: For the traditional classifier, this argument does not affect the program; training and testing always run together.
  • rgidx: Optional. Random-group starting index; e.g. if 5, training starts from the 5th random group (default 1). Use this argument when a run has been accidentally interrupted.
  • gpu: Optional. GPU occupation fraction, by default 1.0 (full occupation of the available GPUs).
  • baseepoch: Optional. Specify which epoch to test.

How to train / test the zero-shot classifier in Phase 2

An example:

python3 train_unseen.py \
        --data 20news \
        --unseen 0.5 \
        --model vwvcvkg \
        --ns 2 --ni 2 --sepoch 10 \
        --rgidx 1 --train 1

The arguments of the command are:

  • data: Dataset, either dbpedia or 20news.
  • unseen: Rate of unseen classes, either 0.25 or 0.5.
  • model: The model to be trained. This argument can be one of the following (corresponding to Table 6 in the paper):
    • kgonly: the inputs are the relationship vectors extracted from the knowledge graph (KG).
    • vcvkg: the inputs contain the embeddings of class labels and the relationship vectors.
    • vwvkg: the inputs contain the embeddings of words (from the text) and the relationship vectors.
    • vwvc: the inputs contain the embeddings of words and class labels.
    • vwvcvkg: all three kinds of inputs mentioned above.
  • train: 1 for training, 0 for testing.
  • sepoch: Repeat the training of each epoch several times. The positive/negative sample ratio and the learning rate stay constant within an epoch, no matter how many times the epoch is repeated.
  • ns: Optional, integer. The ratio of negative to positive samples; the higher the value, the more negative samples (default 2).
  • ni: Optional, integer. The speed at which negative samples are added during training per epoch (default 2).
  • rgidx: Optional. Random-group starting index; e.g. if 5, training starts from the 5th random group (default 1). Use this argument when a run has been accidentally interrupted.
  • gpu: Optional. GPU occupation fraction, by default 1.0 (full occupation of the available GPUs).
  • baseepoch: Optional. Specify which epoch to test.
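Conceptually, the model flags above select which feature blocks feed the classifier, i.e. a concatenation along the lines of the sketch below (the helper and the dimensions are hypothetical, chosen only to illustrate the flag semantics):

```python
import numpy as np

def build_input(model, v_w, v_c, v_kg):
    """Concatenate the feature blocks selected by the model flag."""
    if model == "kgonly":
        parts = [v_kg]
    else:
        parts = []
        if "vw" in model:   # word embeddings from the text
            parts.append(v_w)
        if "vc" in model:   # class-label embeddings
            parts.append(v_c)
        if "vkg" in model:  # KG relationship vectors
            parts.append(v_kg)
    return np.concatenate(parts)

# Illustrative dimensions only.
v_w, v_c, v_kg = np.zeros(200), np.zeros(200), np.zeros(10)
x = build_input("vwvcvkg", v_w, v_c, v_kg)
```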

Acknowledgement

We would like to thank Douglas McIlwraith, Nontawat Charoenphakdee, and three anonymous reviewers for helpful suggestions. Jingqing and Piyawat would also like to thank the support from LexisNexis® Risk Solutions HPCC Systems® academic program and Anandamahidol Foundation, respectively.

We would also like to thank @Nan Guoshun for reporting bugs.

Citation

@inproceedings{zhangkumjornZeroShot,
    title = "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
    author = "Zhang, Jingqing and
      Lertvittayakumjorn, Piyawat and
      Guo, Yike",
    booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Long Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, USA",
    publisher = "Association for Computational Linguistics",
}
