# Text classification

Text classification is the task of assigning an appropriate category to a sentence or document. The categories depend on the chosen dataset and can range from news topics to question types.
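
As a minimal, illustrative sketch (not describing any system in the leaderboards below), a simple topic classifier can be built from bag-of-words features; the example sentences and labels here are invented for demonstration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: the sentences and topic labels are invented for illustration.
train_texts = [
    "The central bank raised interest rates again.",
    "The striker scored twice in the final.",
    "New smartphone chips promise longer battery life.",
]
train_labels = ["business", "sports", "sci/tech"]

# Bag-of-words (TF-IDF) features feeding a linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["Stocks fell sharply after the earnings report."]))
```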

## AG News

The AG News corpus consists of news articles from the AG's corpus of news on the web, restricted to the four largest classes. The dataset contains 30,000 training and 1,900 test examples per class. Models are evaluated on error rate (lower is better).

| Model | Error | Paper / Source | Code |
| --- | --- | --- | --- |
| XLNet (Yang et al., 2019) | 4.49 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | Official |
| ULMFiT (Howard and Ruder, 2018) | 5.01 | Universal Language Model Fine-tuning for Text Classification | Official |
| CNN (Johnson and Zhang, 2016)* | 6.57 | Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings | Official |
| DPCNN (Johnson and Zhang, 2017) | 6.87 | Deep Pyramid Convolutional Neural Networks for Text Categorization | Official |
| VDCNN (Conneau et al., 2016) | 8.67 | Very Deep Convolutional Networks for Text Classification | Non-official |
| Char-level CNN (Zhang et al., 2015) | 9.51 | Character-level Convolutional Networks for Text Classification | Non-official |

\* Results reported in Johnson and Zhang, 2017
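
The Error column is the test-set error rate in percent, i.e. 100 × (1 − accuracy). As a hedged sketch of how such a number is computed (the Hugging Face datasets identifier `ag_news` is an assumption of this example, and the majority-class baseline is only there to have predictions to score):

```python
from collections import Counter
from datasets import load_dataset

# "ag_news" as the Hugging Face datasets identifier is an assumption of this sketch.
train = load_dataset("ag_news", split="train")  # 4 classes x 30,000 examples
test = load_dataset("ag_news", split="test")    # 4 classes x 1,900 examples

# Trivial majority-class baseline, only so that there are predictions to score.
majority_label = Counter(train["label"]).most_common(1)[0][0]
predictions = [majority_label] * len(test)

errors = sum(pred != gold for pred, gold in zip(predictions, test["label"]))
error_rate = 100.0 * errors / len(test)  # the Error column is 100 * (1 - accuracy)
print(f"Error rate: {error_rate:.2f}")   # roughly 75 on a balanced 4-class test set
```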

## DBpedia

The DBpedia ontology dataset contains 560,000 training samples and 70,000 test samples (40,000 and 5,000 per class, respectively) spanning 14 non-overlapping classes from DBpedia. Models are evaluated on error rate (lower is better).

| Model | Error | Paper / Source | Code |
| --- | --- | --- | --- |
| XLNet (Yang et al., 2019) | 0.62 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | Official |
| BERT (Devlin et al., 2018) | 0.64 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Official |
| ULMFiT (Howard and Ruder, 2018) | 0.80 | Universal Language Model Fine-tuning for Text Classification | Official |
| CNN (Johnson and Zhang, 2016) | 0.84 | Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings | Official |
| DPCNN (Johnson and Zhang, 2017) | 0.88 | Deep Pyramid Convolutional Neural Networks for Text Categorization | Official |
| VDCNN (Conneau et al., 2016) | 1.29 | Very Deep Convolutional Networks for Text Classification | Non-official |
| Char-level CNN (Zhang et al., 2015) | 1.55 | Character-level Convolutional Networks for Text Classification | Non-official |

## TREC

The TREC dataset is a question classification dataset consisting of open-domain, fact-based questions divided into broad semantic categories. It comes in a six-class (TREC-6) and a fifty-class (TREC-50) version; both have 5,452 training examples and 500 test examples, but TREC-50 has finer-grained labels (a small sketch contrasting the two label sets follows the tables below). Models are evaluated on error rate (lower is better).

TREC-6:

| Model | Error | Paper / Source | Code |
| --- | --- | --- | --- |
| USE_T+CNN (Cer et al., 2018) | 1.93 | Universal Sentence Encoder | Official |
| ULMFiT (Howard and Ruder, 2018) | 3.6 | Universal Language Model Fine-tuning for Text Classification | Official |
| LSTM-CNN (Zhou et al., 2016) | 3.9 | Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling | |
| CNN+MCFA (Amplayo et al., 2018) | 4 | Translations as Additional Contexts for Sentence Classification | |
| TBCNN (Mou et al., 2015) | 4 | Discriminative Neural Sentence Modeling by Tree-Based Convolution | |
| CoVe (McCann et al., 2017) | 4.2 | Learned in Translation: Contextualized Word Vectors | |

TREC-50:

| Model | Error | Paper / Source | Code |
| --- | --- | --- | --- |
| Rules (Madabushi and Lee, 2016) | 2.8 | High Accuracy Rule-based Question Classification using Question Syntax and Semantics | |
| SVM (Van-Tu and Anh-Cuong, 2016) | 8.4 | Improving Question Classification by Feature Extraction and Selection | |
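
As a hedged sketch of the difference in label granularity (the Hugging Face dataset identifier `trec` and its `coarse_label`/`fine_label` column names are assumptions of this example and may differ between dataset versions):

```python
from datasets import load_dataset

# The dataset id "trec" and the column names "coarse_label"/"fine_label" are
# assumptions of this sketch; other versions of the dataset use different names.
trec = load_dataset("trec", split="train")  # 5,452 training questions

coarse_names = trec.features["coarse_label"].names  # 6 broad classes (TREC-6)
fine_names = trec.features["fine_label"].names      # 50 fine-grained classes (TREC-50)
print(len(coarse_names), len(fine_names))

# Show one question with both its coarse and fine label.
example = trec[0]
print(example["text"],
      coarse_names[example["coarse_label"]],
      fine_names[example["fine_label"]])
```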

Go back to the README