# Linformer: Self-Attention with Linear Complexity (Wang et al., 2020)

This example contains code to train Linformer models as described in our paper *Linformer: Self-Attention with Linear Complexity* ([arXiv:2006.04768](https://arxiv.org/abs/2006.04768)).
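
To illustrate the core idea from the paper (not the implementation in `linformer_src`), here is a minimal single-head PyTorch sketch: keys and values are projected along the sequence dimension from length `n` down to a fixed `k`, reducing attention from O(n²) to O(nk). Names and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    """Illustrative single-head Linformer attention (a sketch, not fairseq's code)."""

    def __init__(self, embed_dim: int, seq_len: int, proj_len: int):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        # E and F project the sequence (length) dimension from n down to k.
        self.E = nn.Linear(seq_len, proj_len, bias=False)
        self.F = nn.Linear(seq_len, proj_len, bias=False)
        self.scale = embed_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Project keys/values along the length dimension: (batch, k, d)
        k = self.E(k.transpose(1, 2)).transpose(1, 2)
        v = self.F(v.transpose(1, 2)).transpose(1, 2)
        # Attention matrix is (batch, n, k) instead of (batch, n, n)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v  # (batch, n, d)

x = torch.randn(2, 512, 64)
out = LinformerSelfAttention(embed_dim=64, seq_len=512, proj_len=128)(x)
print(out.shape)  # torch.Size([2, 512, 64])
```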

## Training a new Linformer RoBERTa model

You can mostly follow the RoBERTa pretraining README, updating your training command with `--user-dir examples/linformer/linformer_src --arch linformer_roberta_base`; a sketch of the full command follows.
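
For concreteness, here is a minimal sketch adapting the masked-LM pretraining command from the RoBERTa pretraining README. The data path and hyperparameters are taken from that README and are assumptions that should be tuned for your setup; only the `--user-dir` and `--arch` flags are Linformer-specific.

```bash
# Assumes data has been preprocessed and binarized as in the RoBERTa README.
DATA_DIR=data-bin/wikitext-103

fairseq-train --fp16 $DATA_DIR \
    --user-dir examples/linformer/linformer_src \
    --arch linformer_roberta_base \
    --task masked_lm --criterion masked_lm \
    --sample-break-mode complete --tokens-per-sample 512 \
    --optimizer adam --adam-betas '(0.9,0.98)' --adam-eps 1e-6 --clip-norm 0.0 \
    --lr-scheduler polynomial_decay --lr 0.0005 --warmup-updates 10000 \
    --total-num-update 125000 --max-update 125000 \
    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
    --max-sentences 16 --update-freq 16
```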

## Citation

If you use our work, please cite:

```bibtex
@article{wang2020linformer,
  title={Linformer: Self-Attention with Linear Complexity},
  author={Wang, Sinong and Li, Belinda and Khabsa, Madian and Fang, Han and Ma, Hao},
  journal={arXiv preprint arXiv:2006.04768},
  year={2020}
}
```