
Hierarchical Transformer (HIT)

This repository contains the source code for HIT (Hierarchical Transformer), which uses a Fused Attention Mechanism (FAME) to learn representations of code-mixed texts. We evaluate HIT on code-mixed sequence classification, token classification, and generative tasks.


We publish the (publicly available) datasets and the experimental setups used for the different tasks.

Installation for experiments

$ pip install -r requirements.txt

Commands to run

Sentiment Analysis

$ cd experiments && python experiments_hindi_sentiment.py \
		--train_data ../data/hindi_sentiment/IIITH_Codemixed.txt \
		--model_save_path ../models/model_hindi_sentiment/

PoS (Parts-of-Speech) Tagging

$ cd experiments && python experiments_hindi_POS.py \
		--train_data '../data/POS Hindi English Code Mixed Tweets/POS Hindi English Code Mixed Tweets.tsv' \
		--model_save_path ../models/model_hindi_pos/

Named Entity Recognition (NER)

$ cd experiments && python experiments_hindi_NER.py \
		--train_data '../data/NER/NER Hindi English Code Mixed Tweets.tsv' \
		--model_save_path ../models/model_hindi_NER/

Machine Translation (MT)

$ cd experiments && python nmt.py \
		--data_path '../data/IITPatna-CodeMixedMT' \
		--model_save_path ../models/model_hindi_NMT/

Evaluation

For the sentiment classification, PoS tagging, and NER tasks, we use macro-averaged precision, recall, and F1 score to evaluate the models. For the machine translation task, we use BLEU, ROUGE-L, and METEOR scores. To account for class imbalance, we use weighted precision for the Hindi sentiment classification task.

$\text{macro-precision} = \frac{1}{C}\sum_{i=1}^{C}pr_{i}$

$\text{macro-recall} = \frac{1}{C}\sum_{i=1}^{C}re_{i}$

$\text{macro-F1} = \frac{1}{C}\sum_{i=1}^{C}\frac{2 \cdot pr_{i} \cdot re_{i}}{pr_{i} + re_{i}}$

Here, $pr_{i}$ and $re_{i}$ are the precision and recall for class $i$, respectively, and $C$ is the number of classes.
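
These macro scores can be computed directly with scikit-learn. A minimal sketch (not this repository's actual evaluation code; the labels below are hypothetical):

```python
# Minimal sketch of the macro-averaged metrics defined above, using
# scikit-learn (an assumption; the repo's scripts may compute them
# differently). Labels are hypothetical.
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 1, 0, 2]  # gold labels for C = 3 classes
y_pred = [0, 1, 1, 1, 0, 2]  # model predictions

# average='macro' computes pr_i, re_i, and F1_i per class and takes their
# unweighted mean over the C classes, matching the formulas above.
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"macro-P={p:.3f}  macro-R={r:.3f}  macro-F1={f1:.3f}")

# For the Hindi sentiment task, class imbalance is accounted for with
# support-weighted precision (average='weighted').
wp, _, _, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(f"weighted-P={wp:.3f}")
```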

The table below can be reproduced using the macro scores defined above.

| Model   | Macro-Precision | Macro-Recall | Macro-F1 |
|---------|-----------------|--------------|----------|
| BiLSTM  | 0.894           | 0.901        | 0.909    |
| HAN     | 0.889           | 0.906        | 0.905    |
| CS-ELMO | 0.901           | 0.903        | 0.909    |
| ML-BERT | 0.917           | 0.914        | 0.909    |
| HIT     | 0.926           | 0.914        | 0.915    |
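
For the machine translation metrics, off-the-shelf implementations can be used. A minimal sketch, assuming the nltk and rouge-score packages are available (the sentences are hypothetical, and the repo's nmt.py may compute these differently):

```python
# Minimal sketch of BLEU, ROUGE-L, and METEOR on a single sentence pair,
# using nltk and the rouge-score package (assumptions; not this repo's
# code). METEOR requires the wordnet corpus: nltk.download('wordnet').
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

reference = "he is going to the market".split()  # hypothetical reference
hypothesis = "he goes to the market".split()     # hypothetical output

# Smoothing avoids zero BLEU on short sentences with missing n-grams.
bleu = sentence_bleu([reference], hypothesis,
                     smoothing_function=SmoothingFunction().method1)
meteor = meteor_score([reference], hypothesis)
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(
    " ".join(reference), " ".join(hypothesis))["rougeL"].fmeasure

print(f"BLEU={bleu:.3f}  METEOR={meteor:.3f}  ROUGE-L={rouge_l:.3f}")
```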

Citation

If you find this repo useful, please cite our paper:

@inproceedings{sengupta-etal-2021-hit,
  author    = {Ayan Sengupta and
               Sourabh Kumar Bhattacharjee and
               Tanmoy Chakraborty and
               Md. Shad Akhtar},
  title     = {HIT: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation},
  booktitle = {Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  url       = {https://aclanthology.org/2021.findings-acl.407},
  doi       = {10.18653/v1/2021.findings-acl.407},
}

About

This repo contains the source code of HIT: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation (accepted at Findings of ACL 2021).
