
OffensEval 2020

This repository contains the work done by the BRUMS team for OffensEval 2020.

Results

The following table shows the best result from each model type.

| Type | Model | Accuracy | Weighted F1 | Macro F1 | Weighted Recall | Weighted Precision | (tn, fp, fn, tp) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RNN | BiGRU - FastText | 0.7828 | 0.7854 | 0.7634 | 0.7828 | 0.7901 | (1416, 238, 337, 657) |
| CNN | CNN - FastText (Remove Words) | 0.7681 | 0.7722 | 0.7510 | 0.7681 | 0.7821 | (1364, 225, 389, 670) |
| Transformers | RoBERTa-base (1) | 0.7893 | 0.7883 | 0.7624 | 0.7893 | 0.7876 | (1490, 295, 263, 600) |
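For reference, below is a minimal sketch of how metrics like those in the table are typically computed with scikit-learn. The labels and predictions are illustrative placeholders, not the repository's actual evaluation code; the `(tn, fp, fn, tp)` column matches the flattened binary confusion matrix.

```python
# Hypothetical sketch: computing the table's metrics with scikit-learn.
# `y_true` and `y_pred` are placeholder binary labels (0 = NOT, 1 = OFF),
# not outputs from this repository.
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
)

y_true = [0, 0, 1, 1, 0, 1]  # gold labels
y_pred = [0, 1, 1, 0, 0, 1]  # model predictions

accuracy = accuracy_score(y_true, y_pred)
weighted_f1 = f1_score(y_true, y_pred, average="weighted")
macro_f1 = f1_score(y_true, y_pred, average="macro")
weighted_recall = recall_score(y_true, y_pred, average="weighted")
weighted_precision = precision_score(y_true, y_pred, average="weighted")

# For binary labels, ravel() flattens the 2x2 confusion matrix into
# the (tn, fp, fn, tp) order used in the table above.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print(accuracy, weighted_f1, macro_f1, weighted_recall, weighted_precision)
print((tn, fp, fn, tp))
```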

About

SemEval-2020 Task 12: OffensEval 2020: Identifying and Categorizing Offensive Language in Social Media
