
deepopinion / domain-adapted-atsc



License: MIT License

Python 100.00%

domain-adapted-atsc's Introduction


Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification

Code for our 2019 paper: "Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification"

Installation

First clone the repository, open a terminal, and cd into the repository:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m spacy download en_core_web_sm
mkdir -p data/raw/semeval2014  # creates directories for data
mkdir -p data/transformed
mkdir -p data/models

For downstream finetuning, you also need to install torch, the pytorch-transformers package, and APEX (here for CUDA 10.0, which is compatible with torch 1.1.0). You can also perform downstream finetuning without APEX, but it was used for the experiments in the paper.

pip install scipy scikit-learn  # add --default-timeout=100 if you get a timeout
pip install https://download.pytorch.org/whl/cu100/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
pip install pytorch-transformers tensorboardX

cd ..
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
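
To check that the GPU setup and the optional APEX installation worked, a quick sanity check like the following can help (a minimal sketch; it only uses the packages installed above):

import torch

print("torch version:", torch.__version__)           # expected: 1.1.0
print("CUDA available:", torch.cuda.is_available())  # should be True for GPU finetuning

try:
    from apex import amp  # mixed-precision utilities; used for the experiments in the paper
    print("APEX with amp is available")
except ImportError:
    print("APEX not installed - finetuning will run without mixed precision")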

Preparing data for BERT Language Model Finetuning

We make use of two publicly available research datasets for the laptops and restaurants domains: the Amazon electronics reviews (laptops) and the Yelp Dataset Challenge reviews (restaurants).

Download these datasets and put them into the data/raw folder.

To prepare the data for language model finetuning, run the following Python scripts:

python prepare_laptop_reviews.py
python prepare_restaurant_reviews.py
python prepare_restaurant_reviews.py --large  # takes some time to finish

Count the number of non-empty lines to get the exact number of sentences:

cat data/transformed/restaurant_corpus_1000000.txt | sed '/^\s*$/d' | wc -l
# Rename the corpus file postfix to the actual number of sentences,
# e.g. restaurant_corpus_1000000.txt -> restaurant_corpus_1000004.txt
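
Equivalently, a small Python sketch (not part of the repository) can count the non-empty lines and rename the corpus file in one step; the filename below is just the example from above:

import os

path = "data/transformed/restaurant_corpus_1000000.txt"  # example corpus file from above

# Count non-empty lines, i.e. the exact number of sentences in the corpus.
with open(path, encoding="utf-8") as f:
    n_sentences = sum(1 for line in f if line.strip())

# Rename the file so its postfix reflects the actual sentence count.
new_path = "data/transformed/restaurant_corpus_{}.txt".format(n_sentences)
os.rename(path, new_path)
print("Renamed to", new_path)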

Concatenate the laptop corpus and the small restaurant corpus to create the mixed corpus (restaurants + laptops):

cd data/transformed
cat laptop_corpus_1011255.txt restaurant_corpus_1000004.txt > mixed_corpus.txt

Preparing SemEval 2014 Task 4 Dataset for Experiments

Download all the SemEval 2014 Task 4 datasets from: http://metashare.ilsp.gr:8080/repository/search/?q=semeval+2014 into

data/raw/semeval2014/

and unpack the archives. Create the preprocessed datasets using the following commands:

Laptops

# laptops

# laptops without conflict label
python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Laptop_Train_v2.xml" \
--output_dir data/transformed/laptops_noconfl \
--istrain \
--noconfl

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Laptops_Test_Gold.xml" \
--output_dir data/transformed/laptops_noconfl \
--noconfl

Restaurants

# restaurants without conflict label
python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Restaurants_Train_v2.xml" \
--output_dir data/transformed/restaurants_noconfl \
--istrain \
--noconfl

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Restaurants_Test_Gold.xml" \
--output_dir data/transformed/restaurants_noconfl \
--noconfl

Mixed

# mixed without conflict label
python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Restaurants_Train_v2.xml" \
"data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Laptop_Train_v2.xml" \
--output_dir data/transformed/mixed_noconfl \
--istrain --noconfl

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Restaurants_Test_Gold.xml" \
"data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Laptops_Test_Gold.xml" \
--output_dir data/transformed/mixed_noconfl --noconfl

New: Upsampling of the training data for an ablation study checking the influence of the label distribution on end performance:

Laptops

# Laptop-upsampled->test:

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Laptop_Train_v2.xml" \
--output_dir data/transformed/laptops_noconfl_uptest \
--istrain \
--noconfl --upsample "0.534 0.201 0.265" --seed 41

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Laptops_Test_Gold.xml" \
--output_dir data/transformed/laptops_noconfl_uptest \
--noconfl

Restaurants

# Restaurants-upsampled->test:

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Restaurants_Train_v2.xml" \
--output_dir data/transformed/restaurants_noconfl_uptest \
--istrain \
--noconfl --upsample "0.650 0.175 0.175" --seed 41

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Restaurants_Test_Gold.xml" \
--output_dir data/transformed/restaurants_noconfl_uptest \
--noconfl
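
The --upsample argument takes a target distribution over the three sentiment labels, and the exact behaviour is defined in prepare_semeval_datasets.py. The sketch below only illustrates the general idea (randomly duplicating examples of under-represented labels until the proportions roughly match the target); the function name and label order are made up for illustration:

import random
from collections import Counter

def upsample_to_distribution(examples, labels, target_fractions, label_order, seed=41):
    # Illustrative only: duplicate examples at random so that the label proportions
    # roughly match target_fractions (given in the same order as label_order).
    random.seed(seed)
    counts = Counter(labels)
    # Choose the total size so that the label with the highest count/fraction ratio
    # keeps all of its original examples without any duplication.
    total = max(int(counts[lab] / frac) for lab, frac in zip(label_order, target_fractions))
    upsampled = []
    for lab, frac in zip(label_order, target_fractions):
        pool = [ex for ex, l in zip(examples, labels) if l == lab]
        needed = int(round(frac * total))
        upsampled += pool + [random.choice(pool) for _ in range(max(0, needed - len(pool)))]
    random.shuffle(upsampled)
    return upsampled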

Release of BERT language models finetuned on a specific domain

The models should be compatible with the huggingface/pytorch-transformers package, version > 1.0. The models are compressed with tar.xz and need to be decompressed before use.
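
After decompressing an archive, the resulting directory can be loaded directly with pytorch-transformers. A minimal sketch (the model path below is only a placeholder for wherever you unpacked the archive):

from pytorch_transformers import BertModel, BertTokenizer

model_dir = "data/models/restaurants_10mio_ep3"  # placeholder: path to the decompressed model directory

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertModel.from_pretrained(model_dir)
model.eval()  # the language model is now ready for downstream finetuning or feature extraction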

BERT Language Model Finetuning

Check the README in the "finetuning_and_classification" folder for how to finetune the BERT models on a domain-specific corpus.

Down-Stream Classification

Check the README in the "finetuning_and_classification" folder for how to train the BERT-ADA models on the downstream task.
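
For orientation, the downstream task is framed as a BERT sentence-pair classification problem: the review sentence is the first segment and the aspect target the second. The sketch below shows what a prediction with a finetuned model could look like using pytorch-transformers; the model path and the label order are assumptions, so check them against the training scripts before relying on them:

import torch
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

model_dir = "data/models/restaurants_atsc_finetuned"  # placeholder: a model finetuned on the downstream task
labels = ["positive", "negative", "neutral"]          # assumption: verify against the training label mapping

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir)
model.eval()

sentence = "The pasta was great, but the service was painfully slow."
aspect = "service"

# Build "[CLS] sentence [SEP] aspect [SEP]" with segment ids 0 / 1.
tokens_a = tokenizer.tokenize(sentence)
tokens_b = tokenizer.tokenize(aspect)
tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
token_type_ids = torch.tensor([segment_ids])

with torch.no_grad():
    logits = model(input_ids, token_type_ids=token_type_ids)[0]
print(labels[logits.argmax(dim=-1).item()])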

Citation

If you use this work, please cite our paper using the following BibTeX entry:

@article{rietzler2019adapt,
   title={Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification},
   author={Rietzler, Alexander and Stabinger, Sebastian and Opitz, Paul and Engl, Stefan},
   journal={arXiv preprint arXiv:1908.11860},
   year={2019}
}

domain-adapted-atsc's People

Contributors

dependabot[bot], paethon, xelda1988


domain-adapted-atsc's Issues

New predictions.

If I want to predict the aspect sentiments of a new review, where should I edit the code?

Can you provide the finetuning code? Error

The original finetune_on_pregenerated.py from the huggingface/pytorch-transformers repository actually has some bugs:

optimizer = AdamW(optimizer_grouped_parameters,
                  lr=args.learning_rate,
                  warmup=args.warmup_proportion,
                  t_total=num_train_optimization_steps)

The warmup and t_total arguments actually shouldn't be used here.

Besides, I first ran "Pregenerating training data" and then "Training on pregenerated data", as the README from huggingface says, but the final files didn't improve the performance.

From the code in the original finetune_on_pregenerated.py:

output_model_file = os.path.join(args.output_dir, WEIGHTS_NAME)
output_config_file = os.path.join(args.output_dir, CONFIG_NAME)

torch.save(model_to_save.state_dict(), output_model_file)
model_to_save.config.to_json_file(output_config_file)
tokenizer.save_vocabulary(args.output_dir)

The final finetuned model files are different from the ones you provide. Can you provide the finetuning code? Appreciated.

assert len(tokens_b) >= 1 fails

When I generate data for pretraining, it fails.

I am using the data file data/transformed/restaurant_corpus_1000000.txt. I found a line with <200d> and it generates an empty sentence.

231072 Normally I love chipotle, but the Portions are always disappointing and the ingredients never seem very fresh.
231073 My sister hates this location's chicken but love chipotle chicken normally, not sure what they do differently.
231074 <200d>

Run experiments with SemEval 2016 Task 5

Hello. Great work! Thanks for publishing the code. I am very new to this field and I am trying to reproduce this work with SemEval 2016 Task 5 for the Russian language. Could you please tell me what modifications I should make in order to prepare the SemEval 2016 Task 5 Russian restaurant dataset for the experiments?

Reproduce results from article

Hello.

Thanks for your work.

I tried to reproduce the results from this article: https://arxiv.org/pdf/1908.11860.pdf

1. I used the scripts for downstream task preparation - the mixed dataset without the conflict label:

# mixed without conflict label
python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Restaurants_Train_v2.xml" \
"data/raw/semeval2014/SemEval-2014 ABSA Train Data v2.0 & Annotation Guidelines/Laptop_Train_v2.xml" \
--output_dir data/transformed/mixed_noconfl \
--istrain --noconfl

python prepare_semeval_datasets.py \
--files "data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Restaurants_Test_Gold.xml" \
"data/raw/semeval2014/SemEval-2014 ABSA Test Data - Gold Annotations/ABSA_Gold_TestData/Laptops_Test_Gold.xml" \
--output_dir data/transformed/mixed_noconfl --noconfl

2. I downloaded the model BERT-ADA Joint (Restaurant + Laptops).

3. I ran the script run_glue.py. I didn't change the parameters.

But I get these metrics:
acc = 0.8447098976109215
f1_macro = 0.7806977430089042

Is it possible that I missed something?

One more question about the post-training process

Thanks for your code! One more question:

1. How long does post-training take with regard to the domain post-training data and the MRC data? How many GPUs did you use, and did you use distributed training? Could you provide more details about the training process?

added_tokens.json file

What is the purpose of the added_tokens.json file in the model files? It only contains {} and nothing else.

Error while executing

l/cached_train_restaurants_10mio_ep3_128_semeval2014-atsc
Traceback (most recent call last):
  File "run_glue.py", line 475, in <module>
    main()
  File "run_glue.py", line 428, in main
    train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
  File "run_glue.py", line 267, in load_and_cache_examples
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
  File "run_glue.py", line 267, in <listcomp>
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
AttributeError: 'list' object has no attribute 'input_ids'

TensorFlow version

Hi,

Thanks for sharing this great work with us.

Can you please provide us with the TensorFlow version of the post-trained BERT models for the laptop and restaurant domains?

I tried to convert the PyTorch models to TensorFlow, but it did not work.

Thank you.

Preparing laptop reviews data for BERT Language Model Finetuning

When I run python prepare_laptop_reviews.py,

I am getting this:

Found 0 laptop items
Loading and Filtering Reviews
Found 0 laptop reviews
Tokenizing laptop Reviews...
Segmented 0 laptop sentences
0it [00:00, ?it/s]
Removed 0 reviews due to overlap with SemEval Laptops Dataset
Done writing to data/transformed/laptop_corpus_0.txt

And this gives me a laptop_corpus_0.txt file, which is an empty file.

I downloaded the Amazon electronics reviews (meta_Electronics.json.gz, reviews_Electronics.json.gz) and put them into the data/raw folder.

I have the SemEval 2014 ABSA train and test data in the data/raw/semeval2014/ folder.

Am I missing something? Could you please help me figure it out?

P.S. Preparation of the restaurant reviews worked just fine and I got the transformed restaurant corpus.
