Code for the paper: **What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding** (EMNLP 2020)
## Toy Experiments

```
python3 absolute.py
python3 relative.py
```
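A rough sketch of the distinction the two scripts presumably probe (module names and shapes below are illustrative, not taken from `absolute.py` or `relative.py`): an absolute encoding adds a learned per-position vector to each token embedding, while a relative encoding biases attention scores by the signed distance between query and key positions.

```python
import torch
import torch.nn as nn

class AbsolutePE(nn.Module):
    """Learned absolute position embedding: one vector per position,
    added to the token embeddings (hypothetical sketch)."""
    def __init__(self, max_len, d_model):
        super().__init__()
        self.emb = nn.Embedding(max_len, d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        pos = torch.arange(x.size(1), device=x.device)
        return x + self.emb(pos)               # broadcasts over the batch

class RelativeBias(nn.Module):
    """Learned relative position bias: one scalar per head per signed
    (query - key) distance, added to attention logits (hypothetical sketch)."""
    def __init__(self, max_len, n_heads):
        super().__init__()
        self.bias = nn.Embedding(2 * max_len - 1, n_heads)
        self.max_len = max_len

    def forward(self, seq_len):                # -> (n_heads, seq, seq)
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None] + self.max_len - 1  # shift to >= 0
        return self.bias(rel).permute(2, 0, 1)

x = torch.randn(2, 16, 64)
print(AbsolutePE(512, 64)(x).shape)            # torch.Size([2, 16, 64])
print(RelativeBias(512, 8)(16).shape)          # torch.Size([8, 16, 16])
```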
## Text Classification

### Requirements
- torch
- sklearn
- python-box
- tqdm

### Usage
- `cd classification`
- Download dataset: link
- Configure `data_path` and `task` in `config.yaml` (see the sketch after this list)
- Run `python3 main.py`
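Since `python-box` is in the requirements and the entry point reads `config.yaml`, presumably the config is loaded into a `Box` for attribute-style access. A minimal sketch of reading the two fields named above (the actual loading code in `main.py` may differ, and PyYAML is assumed to be installed):

```python
# Hypothetical sketch of loading config.yaml with python-box;
# field names data_path and task come from the usage step above.
import yaml
from box import Box

with open("config.yaml") as f:
    config = Box(yaml.safe_load(f))

print(config.data_path)   # where the downloaded dataset lives
print(config.task)        # which classification task to run
```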
## Language Modeling

### Requirements
- torch
- sklearn
- transformers

### Usage
- `cd lm`
- Download dataset: link
- Configure `TRAIN_FILE`, `TEST_FILE`, and `OUTPUT` in `wikitext2.sh` and `wikitext103.sh`
- Run `bash wikitext2.sh` or `bash wikitext103.sh`
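The paper's analysis centers on the position embeddings learned by pre-trained language models, and with the `transformers` dependency above those matrices can be pulled directly from public checkpoints, for example:

```python
# Extract the learned absolute position embedding matrices
# from pre-trained checkpoints via transformers.
from transformers import BertModel, GPT2Model

bert = BertModel.from_pretrained("bert-base-uncased")
bert_pe = bert.embeddings.position_embeddings.weight   # (512, 768)

gpt2 = GPT2Model.from_pretrained("gpt2")
gpt2_pe = gpt2.wpe.weight                              # (1024, 768)

print(bert_pe.shape, gpt2_pe.shape)
```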
## Machine Translation

### Requirements
- torch
- sklearn
- fairseq==0.9.0

### Usage
- `cd nmt`
- Prepare dataset: `bash prepare-multi30k.sh`
- Train models: `bash train_multi30k.sh`
- Generate translations and evaluate: `bash generate_multi30k.sh`
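One standard point of comparison for learned position encodings in NMT is the fixed sinusoidal encoding of Vaswani et al. (2017), which fairseq provides out of the box. Below is a self-contained sketch of that formula, not fairseq's own implementation:

```python
# Fixed sinusoidal position encoding (Vaswani et al., 2017):
# even dimensions get sines, odd dimensions get cosines, with
# geometrically spaced frequencies. dim is assumed even.
import math
import torch

def sinusoidal_pe(n_pos: int, dim: int) -> torch.Tensor:
    pe = torch.zeros(n_pos, dim)
    pos = torch.arange(n_pos, dtype=torch.float).unsqueeze(1)
    freq = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe[:, 0::2] = torch.sin(pos * freq)
    pe[:, 1::2] = torch.cos(pos * freq)
    return pe                                  # (n_pos, dim)

print(sinusoidal_pe(1024, 512).shape)          # torch.Size([1024, 512])
```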
## Citation

Main paper to be cited:

```bibtex
@inproceedings{wang2020position,
  title={What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding},
  author={Wang, Yu-An and Chen, Yun-Nung},
  booktitle={EMNLP 2020},
  year={2020}
}
```