This project provides the implementation of the models described in the paper Linguistic realisation as machine translation: Comparing different MT models for AMR-to-text generation.
To reproduce the results reported in the paper, run the following scripts in order: build.sh, parallel.sh, pbmt.sh/nmt.sh, realisation.sh and evaluation.sh.
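Assuming each script has already been configured as described in the sections that follow, the overall pipeline can be sketched as:

```shell
# Sketch of the full pipeline; run each script only after
# updating its variables as described in this README.
./build.sh         # train the compression and preordering models
./parallel.sh      # build the preprocessed AMR-to-text parallel corpora
./pbmt.sh          # or ./nmt.sh: train, tune and translate
./realisation.sh   # realise the delexicalised outputs
./evaluation.sh    # compute the evaluation scores
```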
This script (build.sh) trains the compression and preordering models for both lexicalised and delexicalised data. Before running the script, update the following variables:
- corpus: path to the aligned parallel corpus, split into training, dev and test sets
- path_lex: path where the lexicalised compressor and preordering models should be saved
- path_delex: path where the delexicalised compressor and preordering models should be saved
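As a concrete illustration, the variables at the top of build.sh might be set as follows (all paths below are placeholders, not values shipped with the project):

```shell
# Hypothetical values; replace with your own paths.
corpus=$HOME/data/amr-corpus           # aligned parallel corpus, split into training/dev/test
path_lex=$HOME/models/lexicalised      # output: lexicalised compressor and preordering models
path_delex=$HOME/models/delexicalised  # output: delexicalised compressor and preordering models
```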
This script (parallel.sh) creates parallel corpora between preprocessed AMRs and their corresponding English texts. Several versions of the corpus are created, one for each combination of AMR preprocessing techniques; these combinations are all listed in Table 1 of the paper. Before running the script, update the following variables:
- corpus: path to the aligned parallel corpus, split into training, dev and test sets
- path_lex: path where the lexicalised compressor and preordering models should be saved
- path_delex: path where the delexicalised compressor and preordering models should be saved
Run this script (pbmt.sh) to train, tune and translate using the Phrase-Based Machine Translation model described in the paper. Before running the script, update the following variables:
- mosesdecoder: path to Moses
- mgiza: path to MGIZA
- lm: path to a 5-gram language model trained with KenLM
- data: path where the parallel corpora produced by parallel.sh are stored
Note that Moses, MGIZA and a 5-gram KenLM language model are required for this step.
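If no language model is available yet, one can be estimated from the English side of the training data. A minimal sketch, assuming KenLM's lmplz and build_binary are on the PATH and train.en is the English training text (both file names are placeholders):

```shell
# Estimate a 5-gram model and binarise it for faster loading.
lmplz -o 5 < train.en > train.arpa
build_binary train.arpa train.blm
```

The binarised train.blm (or the plain train.arpa) is then the value to use for the lm variable above.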
Run this script (realisation.sh) to realise the references of the delexicalised outputs. Before running the script, update the following variable:
- data: path to the experiment directory
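For instance, the variable might point at the directory holding the outputs of the previous steps (the path below is hypothetical):

```shell
# Hypothetical value; replace with your own experiment directory.
data=$HOME/experiments/amr2text
```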