Dataset for our AAAI 2019 paper: Generating Distractors for Reading Comprehension Questions from Real Examinations https://arxiv.org/abs/1809.02768
Currently, the code is available upon request and is shared for research purposes only. Please do NOT email me if you are working on a course project.
If you use our data or code, please cite our paper as follows:
```
@inproceedings{gao2019distractor,
  title="Generating Distractors for Reading Comprehension Questions from Real Examinations",
  author="Yifan Gao and Lidong Bing and Piji Li and Irwin King and Michael R. Lyu",
  booktitle="AAAI-19 AAAI Conference on Artificial Intelligence",
  year="2019"
}
```
In the task of Distractor Generation (DG), we aim to generate reasonable distractors
(wrong options) for multiple-choice questions (MCQs) in reading comprehension.
The generated distractors should:
- be longer and semantically rich
- be semantically related to the reading comprehension question
- not be paraphrases of the correct answer option
- be grammatically consistent with the question, especially for questions with a blank at the end
Here is an example from our dataset. The question, options, and their relevant sentences in the article are marked with the same color.
Why distractor generation matters:
- Help the preparation of MCQ reading comprehension datasets
- Well-designed distractors cause existing state-of-the-art content-matching reading comprehension models to fail on MCQ datasets such as RACE
- Large datasets can boost the performance of MCQ reading comprehension systems
- Alleviate instructors' workload in designing MCQs for students
- Poor distractor options can make the questions almost trivial to solve
- Reasonable distractors are time-consuming to design
The data used in our paper is transformed from the RACE reading comprehension dataset. We prune distractors that have no semantic relevance to the article or that require world knowledge to generate.
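To make the pruning criterion concrete, here is a minimal illustrative sketch of a relevance-based filter. This is NOT the actual preprocessing used for the released data (which relies on spaCy); it only shows the idea of keeping a distractor when it shares enough content words with the article. The stopword list, overlap threshold, and example texts are all hypothetical.

```python
# Hypothetical sketch of relevance-based distractor pruning, not the
# authors' actual filter. A distractor is kept only if it shares at
# least `min_overlap` content words with the article.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "it"}

def content_words(text):
    # Lowercase, strip simple punctuation, and drop stopwords.
    return {w.lower().strip(".,?!") for w in text.split()} - STOPWORDS

def is_relevant(distractor, article, min_overlap=2):
    return len(content_words(distractor) & content_words(article)) >= min_overlap

article = "The museum opens at nine and closes at five on weekdays."
keep = is_relevant("It closes at five on weekdays.", article)   # relevant: kept
drop = is_relevant("Quantum computers use qubits.", article)    # irrelevant: pruned
```

A real filter would use embeddings or parser output rather than raw token overlap, but the keep/prune decision has the same shape.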
The processed data is in the /data/ directory. Please uncompress it first.
Here are the dataset statistics.
Note
Due to a bug in spaCy, the released data differs slightly from the data we used for the paper submission, so experimental results may differ.