> **Note:** This repository was archived by the owner on Sep 22, 2020 and is now read-only.


# Distractor-Generation-RACE

Dataset for our AAAI 2019 paper: *Generating Distractors for Reading Comprehension Questions from Real Examinations* (https://arxiv.org/abs/1809.02768).

If you use our data or code, please cite our paper as follows:

@inproceedings{gao2019distractor,
	title="Generating Distractors for Reading Comprehension Questions from Real Examinations",
	author="Yifan Gao and Lidong Bing and Piji Li and Irwin King and Michael R. Lyu",
	booktitle="AAAI-19 AAAI Conference on Artificial Intelligence",
	year="2019"
}

## Distractor Generation: A New Task

In the task of Distractor Generation (DG), we aim to generate reasonable distractors (wrong options) for multiple-choice questions (MCQs) in reading comprehension.

The generated distractors should:

- be long and semantically rich
- be semantically related to the reading comprehension question
- not be paraphrases of the correct answer option
- be grammatically consistent with the question, especially for questions with a blank at the end

Here is an example from our dataset. The question, options and their relevant sentences in the article are marked with the same color.
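A single instance can be sketched as a small record of article, question, answer, and distractors. The field names below are illustrative, not the released schema; inspect the `/data/` files for the actual format.

```python
# A hypothetical example instance for distractor generation.
# Field names are illustrative; the actual dataset schema may differ.
example = {
    "article": "The Arctic tern migrates from the Arctic to the Antarctic "
               "every year, covering roughly 70,000 km round trip.",
    "question": "The Arctic tern is remarkable because it _ .",
    "answer": "makes an extremely long migration every year",
    "distractors": [
        "never leaves the Arctic region",
        "flies only at night during winter",
        "feeds exclusively on land insects",
    ],
}

# A good distractor relates to the article and question but is not a
# paraphrase of the correct answer.
for d in example["distractors"]:
    assert d != example["answer"]
```

Note how each distractor stays on topic (birds, flight, feeding) while contradicting the article, and each completes the blank-ended question grammatically.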

## Real-world Applications

- Help the preparation of MCQ reading comprehension datasets
  - Plausible distractors can fool existing state-of-the-art content-matching reading comprehension systems on MCQ datasets such as RACE
  - Larger datasets can boost the performance of MCQ reading comprehension systems
- Alleviate instructors' workload in designing MCQs for students
  - Poor distractor options can make the questions almost trivial to solve
  - Reasonable distractors are time-consuming to design

## Processed Dataset

The data used in our paper is transformed from the RACE Reading Comprehension Dataset. We prune distractors that have no semantic relevance to the article or that require world knowledge to generate.
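The exact pruning rule is not spelled out here; a simple word-overlap heuristic in the same spirit might look like the following sketch (this is an illustrative check, not the rule used to build the released dataset):

```python
def has_semantic_relevance(distractor: str, article: str, min_overlap: int = 2) -> bool:
    """Crude relevance check: keep a distractor only if it shares at least
    `min_overlap` content words with the article. Illustrative heuristic,
    not the paper's actual pruning rule."""
    stopwords = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "was"}
    d_words = {w.lower().strip(".,!?") for w in distractor.split()} - stopwords
    a_words = {w.lower().strip(".,!?") for w in article.split()} - stopwords
    return len(d_words & a_words) >= min_overlap

article = "Bats use echolocation to navigate and hunt insects in the dark."
print(has_semantic_relevance("Bats hunt insects using echolocation", article))  # True
print(has_semantic_relevance("The stock market closed higher today", article))  # False
```

The released data was filtered with spaCy-based processing rather than raw string matching, so results from a heuristic like this will differ from the published splits.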

The processed data is in the `/data/` directory. Please uncompress it first.

Here are the dataset statistics.

**Note:** Due to a bug in spaCy, more examples in the RACE dataset should have been filtered by our rule, but we were not aware of this issue when we did this project. We therefore release both the original dataset (`race_train/dev/test_original.json`) and the updated dataset (`race_train/dev/test_updated.json`). Because the updated dataset is smaller, performance will be worse if the model is trained on it.
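Loading a split is a plain JSON read. The sketch below assumes the files are standard JSON (the top-level structure is an assumption; inspect the released files for the real schema), and demonstrates with a tiny stand-in file since the real archive must be uncompressed into `/data/` first:

```python
import json

def load_split(path):
    """Load one processed split, assuming a standard JSON file
    such as data/race_train_updated.json."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Demo with a tiny stand-in file (hypothetical record structure).
sample = [{"question": "The tern is famous for _ .", "distractors": ["a", "b", "c"]}]
with open("sample_split.json", "w", encoding="utf-8") as f:
    json.dump(sample, f)

data = load_split("sample_split.json")
print(len(data))  # 1
```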

## Code

Our implementation is based on OpenNMT-py.

### Preprocess

GloVe vectors are required; please download `glove.840B.300d` first. Run `scripts/preprocess.sh` to preprocess the data and build the corresponding word embeddings.
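For reference, GloVe's text format is one token per line followed by its vector components. A minimal reader in this spirit (a sketch, not the project's preprocessing code; the demo uses a 4-dimensional stand-in file rather than the real 300-dimensional release):

```python
def load_glove(path, vocab=None, dim=300):
    """Read GloVe text format: each line is a token followed by `dim` floats.
    If `vocab` is given, keep only those tokens to save memory."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ")
            word, vec = parts[0], parts[1:]
            if len(vec) != dim:  # glove.840B.300d has a few multi-word tokens; skip them
                continue
            if vocab is not None and word not in vocab:
                continue
            embeddings[word] = [float(x) for x in vec]
    return embeddings

# Demo with a tiny 4-dimensional stand-in file.
with open("mini_glove.txt", "w", encoding="utf-8") as f:
    f.write("the 0.1 0.2 0.3 0.4\n")
    f.write("cat 0.5 0.6 0.7 0.8\n")

emb = load_glove("mini_glove.txt", dim=4)
print(sorted(emb))  # ['cat', 'the']
```

Passing the task vocabulary via `vocab` keeps memory manageable, since the full `glove.840B.300d` file covers about 2.2 million tokens.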

### Train, Generate & Evaluate

Run `scripts/train.sh` for training, and `scripts/generate.sh` for generation and evaluation.