
Task Compass: Scaling Multi-task Pre-training with Task Prefix

This repository contains the source code for the EMNLP 2022 (Findings) paper: Task Compass: Scaling Multi-task Pre-training with Task Prefix [PDF]. In this paper, we propose a task prefix guided multi-task pre-training framework (CompassMTL) to explore the relationships among tasks. CompassMTL is based on the DeBERTa architecture and is trained on 40 natural language understanding tasks. Please refer to our paper for more details.
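To illustrate the core idea, the sketch below prepends a task prefix to the input before tokenization so the model can condition on task identity. The prefix format ("anli:") and the base checkpoint are assumptions for illustration, not the repository's actual implementation:

# Minimal sketch of task-prefix conditioning (illustrative; the prefix
# format and checkpoint are assumptions, not this repo's exact code).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")

task_prefix = "anli:"  # hypothetical prefix marking the source task
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Prepend the prefix to the first segment of the NLI pair.
inputs = tokenizer(task_prefix + " " + premise, hypothesis, return_tensors="pt")
print(inputs["input_ids"].shape)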

Environment

  • numpy
  • torch
  • transformers==4.17.0
  • wandb
  • sentencepiece
  • sklearn
  • datasets
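Only transformers is pinned; the other packages can be installed at recent versions. A plausible one-line setup, assuming the sklearn entry refers to the scikit-learn package on PyPI:

pip install numpy torch transformers==4.17.0 wandb sentencepiece scikit-learn datasets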

Data

Download the data from datasets.

Instructions

Training:

bash run_train.sh

Evaluation:

bash run_evaluate.sh

Commonsense Reasoning Models (ANLI and HellaSwag)

We provide the models and outputs for the ANLI and HellaSwag commonsense reasoning tasks:

Our single models for ANLI and HellaSwag are available at reasoning_models.

The outputs can be found at model_outputs.
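As a hedged sketch of how a downloaded ANLI checkpoint could be used for inference with transformers (the local path and the label order are assumptions, not documented by the repository):

# Illustrative inference with a downloaded checkpoint; the path is a
# placeholder and the label mapping is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_dir = "path/to/reasoning_model"  # hypothetical unpacked checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted label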

Reference

Please kindly cite this paper in your publications if it helps your research:

@inproceedings{zhang2022task,
  title={Task Compass: Scaling Multi-task Pre-training with Task Prefix},
  author={Zhang, Zhuosheng and Wang, Shuohang and Xu, Yichong and Fang, Yuwei and Yu, Wenhao and Liu, Yang and Zhao, Hai and Zhu, Chenguang and Zeng, Michael},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2022},
  year={2022}
}
