
KPQA

This repository provides an evaluation metric for generative question answering systems, based on our NAACL 2021 paper KPQA: A Metric for Generative Question Answering Using Keyphrase Weights.
It contains the code to compute the KPQA metric, along with the human-annotated data.

Usage

1. Install Prerequisites

Create a Python 3.6 environment and install the packages listed in "requirements.txt":

conda create -n kpqa python=3.6
conda activate kpqa
pip install -r requirements.txt

2. Download Pretrained Model

We provide the pre-trained KPQA model at the following link:
https://drive.google.com/file/d/1pHQuPhf-LBFTBRabjIeTpKy3KGlMtyzT/view?usp=sharing
Download "ckpt.zip" and extract it (the default directory is "./ckpt").

3. Compute Metric

You can compute the KPQA metric using "compute_KPQA.py" as follows:

python compute_KPQA.py \
  --data sample.csv \
  --model_path $CHECKPOINT_DIR \
  --out_file results.csv \
  --num_ref 1

--data: the data to score; see "sample.csv" for the file format.
--model_path: path to the checkpoint directory (where "ckpt.zip" was extracted).
--out_file: output file containing a score for each question-answer pair; see the sample output in "result.csv".
--num_ref: number of references to use when computing the score with multiple references.
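
After the script finishes, the per-example scores in the output file can be aggregated into a corpus-level number. The snippet below is a minimal sketch that assumes pandas is installed and that the score column in "results.csv" is named "score"; check the provided "result.csv" for the actual column names.

# Minimal sketch: aggregate the per-example KPQA scores from the output file.
# Assumes pandas is available and that the score column is named "score";
# inspect the sample "result.csv" for the actual column names.
import pandas as pd

results = pd.read_csv("results.csv")
print(results.head())  # one row per question-answer pair
print("Mean KPQA score:", results["score"].mean())  # assumed column name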

Train KPQA (optional)

You can train your own KPQA model on the provided dataset, or on a dataset of your own, using "train.py". The script "train_kpqa.sh" runs training with the default settings.

Dataset

We provide human judgments of correctness for four datasets: MS-MARCO NLG, AVSD, NarrativeQA, and SemEval 2018 Task 11 (SemEval).
For MS-MARCO NLG and AVSD, we generate the answers using two models for each dataset.

For NarrativeQA and SemEval, we preprocessed the datasets from Evaluating Question Answering Evaluation.
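
These human judgments are typically used to check how well an automatic metric correlates with human ratings of correctness. The sketch below is one hypothetical way to do that with scipy; the file name and the column names "metric_score" and "human_score" are assumptions, so align them with the actual annotation files.

# Hypothetical sketch: correlate metric scores with human correctness judgments.
# The file name and column names below are placeholders; align them with the
# actual annotation files provided in this repository.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("scores_with_human_judgments.csv")  # placeholder file name
pearson_r, _ = pearsonr(df["metric_score"], df["human_score"])
spearman_rho, _ = spearmanr(df["metric_score"], df["human_score"])
print(f"Pearson r: {pearson_r:.3f}, Spearman rho: {spearman_rho:.3f}")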

Reference

If you find this repo useful, please consider citing:

@inproceedings{lee2021kpqa,
  title={KPQA: A Metric for Generative Question Answering Using Keyphrase Weights},
  author={Lee, Hwanhee and Yoon, Seunghyun and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Shin, Joongbo and Jung, Kyomin},
  booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  pages={2105--2115},
  year={2021}
}