TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations


This repo contains models, code and pointers to datasets from our paper: TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations. [PDF] [HuggingFace Models] [Video]

Overview

TwHIN-BERT is a new multilingual Tweet language model trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models in that it is trained not only with text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN).
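
The exact social objective is detailed in the paper; as a rough illustration only, a contrastive (InfoNCE-style) loss over pairs of socially similar Tweets could look like the sketch below, where the function name, batching, and temperature are all assumptions rather than the authors' code:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (not the released training code): a contrastive loss
# that pulls together embeddings of Tweets with similar TwHIN engagements
# and pushes apart in-batch negatives.
def social_contrastive_loss(anchor: torch.Tensor,    # (B, d) Tweet embeddings
                            positive: torch.Tensor,  # (B, d) socially similar Tweets
                            tau: float = 0.1) -> torch.Tensor:
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / tau                            # (B, B) pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)   # diagonal = positives
    return F.cross_entropy(logits, labels)
```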

TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement.

1. Pretrained Models

We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the HuggingFace BERT models.

| Model | Size | Download Link (🤗 HuggingFace) |
| --- | --- | --- |
| TwHIN-BERT-base | 280M parameters | [Twitter/twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base) |
| TwHIN-BERT-large | 550M parameters | [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) |

To use these models in 🤗 Transformers:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModel.from_pretrained('Twitter/twhin-bert-base')
inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt")
outputs = model(**inputs)
```
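
Continuing from the snippet above, a single Tweet embedding can be obtained by pooling the final hidden states; mean pooling over non-padding tokens is shown here as one common choice, not necessarily the pooling used in the paper:

```python
# Mean-pool the last hidden states over non-padding tokens to get one
# fixed-size embedding per input (mean pooling is an assumption here).
mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
summed = (outputs.last_hidden_state * mask).sum(dim=1)  # (1, hidden_size)
embedding = summed / mask.sum(dim=1)                    # average over real tokens
```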

2. Benchmark Datasets

The datasets are licensed under a Creative Commons Attribution 4.0 International License.

2.1 Multilingual Hashtag Prediction

Please check the official dataset repo on HuggingFace (link) for the dataset description and download instructions.

A hydrated version of the dataset can be downloaded here. You must follow Twitter's terms of service when using the hydrated dataset.
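
If the dataset is hosted in standard 🤗 Datasets format, it can likely be loaded as below; the dataset id shown is a hypothetical placeholder, so substitute the id from the official repo linked above:

```python
from datasets import load_dataset

# "Twitter/HashtagPrediction" is a hypothetical placeholder id; replace it
# with the actual dataset id from the official HuggingFace repo.
dataset = load_dataset("Twitter/HashtagPrediction")
print(dataset)  # inspect the available splits and columns
```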

2.2 Engagement Prediction

A hydrated version of the dataset can be downloaded here. You must follow Twitter's terms of service when using the hydrated dataset.
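
One natural way to apply TwHIN-BERT to engagement prediction is to frame it as sequence classification over Tweet text; the sketch below assumes that framing and is not necessarily the paper's exact setup:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: treat engagement prediction as binary classification
# (engaged vs. not engaged). The classification head is randomly
# initialized and must be fine-tuned on the dataset above.
tokenizer = AutoTokenizer.from_pretrained("Twitter/twhin-bert-base")
clf = AutoModelForSequenceClassification.from_pretrained(
    "Twitter/twhin-bert-base", num_labels=2
)
```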

Citation

If you use TwHIN-BERT or our datasets in your work, please cite the following:

```bibtex
@article{zhang2022twhin,
  title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
  author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
  journal={arXiv preprint arXiv:2209.07562},
  year={2022}
}
```
