NLU Dataset Diagnostics

This repository contains data and scripts to reproduce the results from our paper:

Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann. 2022. How Does Data Corruption Affect Natural Language Understanding Models? A Study on GLUE datasets.

A central question in natural language understanding (NLU) research is whether high performance demonstrates the models' strong reasoning capabilities. We present an extensive series of controlled experiments where pre-trained language models are exposed to data that have undergone specific corruption transformations. The transformations involve removing instances of specific word classes and often lead to non-sensical sentences. Our results show that performance remains high for most GLUE tasks when the models are fine-tuned or tested on corrupted data, suggesting that the models leverage other cues for prediction even in non-sensical contexts. Our proposed data transformations can be used as a diagnostic tool for assessing the extent to which a specific dataset constitutes a proper testbed for evaluating models' language understanding capabilities.
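To make the corruption concrete, here is a minimal sketch of a word-class-removal transformation of the kind described above. It uses spaCy for part-of-speech tagging purely for illustration; the spaCy pipeline and the `remove_pos` helper are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a word-class-removal corruption, assuming spaCy
# for POS tagging. Illustration only, not the paper's exact code.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline

def remove_pos(sentence: str, pos: str = "NOUN") -> str:
    """Drop every token whose coarse POS tag matches `pos`."""
    doc = nlp(sentence)
    kept = [tok.text for tok in doc if tok.pos_ != pos]
    return " ".join(kept)

print(remove_pos("The cat sat on the mat.", pos="NOUN"))
# -> "The sat on the ."  (often non-sensical, as noted above)
```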

Reproduce our results

Install the dependencies by running:

pip install -r requirements.txt

Run the experiments using the following command:

bash run_experiment.sh

run_experiment.sh starts a fine-tuning job for each configuration and therefore assumes an environment with access to many GPU nodes managed by a job scheduler such as SLURM. To run a single configuration, modify train.sh and run it:

bash train.sh

The Python script run_corrupt_glue.py is a modified version of the run_glue.py script by Hugging Face, available in their Transformers text classification examples.
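As a rough sketch of how a corruption like the one above could be applied to a GLUE task before fine-tuning, the snippet below uses the Hugging Face `datasets` library; the column name `sentence` is specific to SST-2, and `remove_pos` refers to the helper sketched earlier. This illustrates the general workflow only; it is an assumption about how the data flows, not the internals of run_corrupt_glue.py.

```python
# Sketch: corrupt a GLUE task before fine-tuning, assuming the
# Hugging Face `datasets` library and the `remove_pos` helper
# defined in the earlier sketch. Not run_corrupt_glue.py itself.
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")  # SST-2 stores text in "sentence"

def corrupt(example):
    # Apply the word-class-removal transformation to each example.
    example["sentence"] = remove_pos(example["sentence"], pos="NOUN")
    return example

corrupted = dataset.map(corrupt)  # mapped over train/validation/test splits
```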

Cite our paper

@misc{talman_et_al2022,
      title={How Does Data Corruption Affect Natural Language Understanding Models? A Study on GLUE datasets}, 
      author={Aarne Talman and Marianna Apidianaki and Stergios Chatzikyriakidis and J\"org Tiedemann},
      year={2022},
      eprint={2201.04467},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
