Breaking the Language Barrier: Improving Cross-Lingual Reasoning with Structured Self-Attention

License: MIT

Code for the paper: "Breaking the Language Barrier: Improving Cross-Lingual Reasoning with Structured Self-Attention" [EMNLP 2023 - Findings]

Requirements

We recommend using a conda environment to run the scripts. The following commands create the environment (assuming CUDA 11.3):

conda create -n cross-lingual-attention python=3.10.9
conda activate cross-lingual-attention
pip install -r requirements.txt
conda install faiss-cpu -c pytorch
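
Once the environment is created, a quick sanity check can confirm that PyTorch sees the GPU (this assumes PyTorch is among the pinned packages in requirements.txt):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"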

Data

Translations of the RuleTaker and LeapOfThought (LoT) datasets can be downloaded from here. For the LoT dataset, the files starting with "randomized" are the modified versions of the data used in our experiments (50% of the statements are randomly negated). For more information on this dataset, please see Appendix A.2 of the paper.

Usage

In all of the following experiments, you can either pass the arguments directly to the Python script or specify them in a JSON file and pass the file path to the script (an example JSON file is shown after the first command below).

Fine-tuning a model for RuleTaker or LoT datasets:

# --data_base_dir: assumes each language has its own folder inside this directory
# --train_second_language: only needed when fine-tuning on two datasets
python standard_finetuning.py \
    --output_dir ruletaker_finetuning \
    --data_base_dir data \
    --model_type mbert \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --dataset_name ruletaker \
    --rule_taker_depth_level 0 \
    --num_train_epochs 4 \
    --learning_rate 1e-5 \
    --save_strategy epoch \
    --evaluation_strategy steps \
    --logging_steps 1000 \
    --train_language en \
    --train_second_language fr \
    --overwrite_output_dir \
    --seed 57
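
Equivalently, the same run can be described in a JSON file whose path is passed to the script. The snippet below is a hypothetical example whose field names simply mirror the command-line flags above; the exact file layout accepted by the script may differ:

{
  "output_dir": "ruletaker_finetuning",
  "data_base_dir": "data",
  "model_type": "mbert",
  "do_train": true,
  "do_eval": true,
  "per_device_train_batch_size": 32,
  "per_device_eval_batch_size": 32,
  "dataset_name": "ruletaker",
  "rule_taker_depth_level": 0,
  "num_train_epochs": 4,
  "learning_rate": 1e-5,
  "save_strategy": "epoch",
  "evaluation_strategy": "steps",
  "logging_steps": 1000,
  "train_language": "en",
  "train_second_language": "fr",
  "overwrite_output_dir": true,
  "seed": 57
}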

Fine-tuning a model using the proposed cross-lingual-aware attention mechanism (for RuleTaker or LoT datasets):

# --data_base_dir: assumes each language has its own folder inside this directory
python finetuning_sep_cross_lingual_attention.py \
    --output_dir ruletaker_finetuning_cross_query \
    --data_base_dir data \
    --model_type mbert \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --evaluate_during_training \
    --dataset_name ruletaker \
    --rule_taker_depth_level 0 \
    --bitfit \
    --num_train_epochs 35 \
    --learning_rate 4e-4 \
    --warmup_ratio 0.1 \
    --save_strategy epoch \
    --evaluation_strategy steps \
    --logging_steps 1000 \
    --mono_alpha 1.0 \
    --cross_mono_alpha 0.3 \
    --cross_alpha 0.3 \
    --mono_eval_alpha 1.0 \
    --cross_mono_eval_alpha 0.0 \
    --cross_eval_alpha 0.0 \
    --language1 en \
    --language2 en-fr \
    --load_query_pretrained \
    --overwrite_output_dir \
    --seed 57
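
For intuition, the idea behind the cross-lingual-aware attention is to route cross-lingual token pairs through a separate query projection. The sketch below is a deliberately simplified, hypothetical single-head version for illustration only; it is not the repository's implementation, and the class and argument names are invented:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualQuerySelfAttention(nn.Module):
    """Illustrative single-head self-attention with a separate query matrix
    for cross-lingual token pairs (simplified sketch, not the repo's code)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.q_mono = nn.Linear(hidden_size, hidden_size)   # query for same-language pairs
        self.q_cross = nn.Linear(hidden_size, hidden_size)  # query for cross-language pairs
        self.k = nn.Linear(hidden_size, hidden_size)
        self.v = nn.Linear(hidden_size, hidden_size)
        self.scale = hidden_size ** -0.5

    def forward(self, x: torch.Tensor, lang_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden); lang_ids: (batch, seq) integer language labels
        k, v = self.k(x), self.v(x)
        scores_mono = self.q_mono(x) @ k.transpose(-1, -2) * self.scale
        scores_cross = self.q_cross(x) @ k.transpose(-1, -2) * self.scale
        # same_lang[b, i, j] is True when tokens i and j belong to the same language
        same_lang = lang_ids.unsqueeze(-1) == lang_ids.unsqueeze(-2)
        scores = torch.where(same_lang, scores_mono, scores_cross)
        return F.softmax(scores, dim=-1) @ v

The various alpha flags in the command above presumably weight the monolingual and cross-lingual components during training and evaluation; see the paper for the exact formulation.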

Pre-training the cross-lingual query matrix:

python pretrain_sep_cross_lingual_attention.py \
    --output_dir pretrain_cross_lingual_query \
    --model_type mbert \
    --do_train \
    --do_eval \
    --mlm \
    --evaluate_during_training \
    --pad_to_max_length \
    --dataset_name xnli \
    --data_language_pairs "en-fr;en-de;en-es;en-ru;en-ar;en-zh" \
    --mono_alpha 0.0 \
    --cross_alpha 0.0 \
    --mono_eval_alpha 0.0 \
    --cross_eval_alpha 0.0 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --learning_rate 2e-5 \
    --max_steps 500000 \
    --logging_steps 5000 \
    --save_steps 10000 \
    --freeze_the_rest \
    --overwrite_output_dir \
    --seed 57
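
The --freeze_the_rest flag suggests that only the newly introduced cross-lingual query parameters are updated during this pre-training stage. A minimal sketch of that pattern is shown below; the name filter is a guess and will not match the repository's actual parameter names:

from torch import nn

def freeze_all_but_cross_query(model: nn.Module) -> None:
    """Freeze every parameter except those whose names look like
    cross-lingual query projections (illustrative name filter only)."""
    for name, param in model.named_parameters():
        param.requires_grad = ("cross" in name) and ("query" in name)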

Notes:

  • Multi-GPU training is currently not supported for the scripts that use the proposed cross-lingual query matrix (i.e., pretrain_sep_cross_lingual_attention.py and finetuning_sep_cross_lingual_attention.py); a single GPU can be selected as shown below.
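
On a multi-GPU machine, these scripts can be restricted to a single device with the standard CUDA_VISIBLE_DEVICES environment variable, for example:

CUDA_VISIBLE_DEVICES=0 python finetuning_sep_cross_lingual_attention.py <arguments as above>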

Citation

If you use this code for your research, please cite our paper:

@inproceedings{foroutan2023breaking,
  title={Breaking the Language Barrier: Improving Cross-Lingual Reasoning with Structured Self-Attention},
  author={Foroutan, Negar and Banaei, Mohammadreza and Aberer, Karl and Bosselut, Antoine},
  booktitle={Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  url={https://arxiv.org/abs/2310.15258},
  year={2023}
}
