Code repository for the paper Xu at SemEval-2022 Task 4: Pre-BERT Neural Network Methods vs Post-BERT RoBERTa Approach for Patronizing and Condescending Language Detection.
Transformer pre-training with the MLM objective: an encoder-only model implemented and trained from scratch on a Wikipedia dataset (see the sketch below).
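For context, a minimal sketch of what from-scratch MLM pre-training typically looks like with the Hugging Face transformers and datasets libraries; the model size, dataset slice, and hyperparameters below are illustrative assumptions, not this repository's actual configuration.

```python
# Minimal sketch: MLM pre-training of an encoder-only model from scratch.
# All sizes/hyperparameters are illustrative, not the repo's settings.
from datasets import load_dataset
from transformers import (AutoTokenizer, BertConfig, BertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # vocab only
config = BertConfig(vocab_size=tokenizer.vocab_size, num_hidden_layers=6)
model = BertForMaskedLM(config)  # randomly initialized, no pre-trained weights

dataset = load_dataset("wikipedia", "20220301.en", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

# The collator randomly masks 15% of tokens and builds the MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                           mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments("mlm-out", per_device_train_batch_size=32),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```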
[EMNLP'23 Oral] PyTorch implementation of ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue
Official implementation of the ACL Findings 2023 paper: Multimedia Generative Script Learning for Task Planning
The official repository for ACL 2024 paper "Learning Sufficient Representations via Conditional Information Flow Maximization"
CoBERTa: pre-trained language models for Vietnamese comment/social media datasets.
Korea Information Processing Society (KIPS) ACK 2023: an Explicit Feature Extraction (EFE) Reasoner model for identifying magnitude relations between numbers
DS-TOD: Efficient Domain Specialization for Task-Oriented Dialog
Code for the ACL2022 paper "C-MORE: Pretraining to Answer Open-Domain Questions by Consulting Millions of References"
Official repository of Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization [EMNLP'22 Findings]
The capability of an AI model (here, ChatGPT) was tested in the customer-service domain using data from real customers who had recently interacted with chatbots or other language-model assistants of established institutions or companies.
BioMedical Language Processing with ELECTRA
The source code used for paper "PIEClass: Weakly-Supervised Text Classification with Prompting and Noise-Robust Iterative Ensemble Training", published in EMNLP 2023.
Sequence-to-Sequence Spanish Pre-trained Language Models
Pretraining GPT2 model on Basque language
The official repo for "VER: Unifying Verbalizing Entities and Relations" (Findings of EMNLP '23)
The implementation for "Can Language Models Be Specific? How?" (ACL 2023 Findings)
CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing (ACL 2022)
Emotion classification based on short texts
RECKONING is a bi-level learning algorithm that improves language models' reasoning ability by folding contextual knowledge into parametric knowledge through back-propagation.
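As a rough illustration of the bi-level idea (not RECKONING's actual code), below is a minimal PyTorch sketch in which an inner gradient step folds contextual knowledge into the parameters and the outer loss on the question back-propagates through that update; the toy model, data shapes, and learning rates are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

# Toy stand-in for a language model; shapes and sizes are illustrative.
model = nn.Linear(16, 4)
inner_lr = 0.1
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def bilevel_step(knowledge_x, knowledge_y, question_x, question_y):
    params = dict(model.named_parameters())

    # Inner loop: one gradient step that folds the contextual knowledge
    # into the parameters. create_graph=True keeps the update differentiable.
    inner_logits = functional_call(model, params, (knowledge_x,))
    inner_loss = F.cross_entropy(inner_logits, knowledge_y)
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Outer loop: answer the question with the knowledge-infused parameters;
    # the loss back-propagates through the inner update to the base weights.
    outer_logits = functional_call(model, adapted, (question_x,))
    outer_loss = F.cross_entropy(outer_logits, question_y)

    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return outer_loss.item()

# Illustrative random batch standing in for (context, question) pairs.
kx, ky = torch.randn(8, 16), torch.randint(0, 4, (8,))
qx, qy = torch.randn(8, 16), torch.randint(0, 4, (8,))
print(bilevel_step(kx, ky, qx, qy))
```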