Korea Information Processing Society (KIPS) ACK 2023: an Explicit Feature Extraction (EFE) Reasoner model for determining the magnitude relationship between numbers.
Evaluates the capability of an AI model (here, ChatGPT specifically) in customer service, using data from real customers who had recently interacted with chatbots or other language models from an established institution or company.
Code used in "An Empirical Study on Pre-trained Embeddings and Language Models for Bot Detection".
Emotion classification based on short texts.
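A minimal sketch of short-text emotion classification with the Hugging Face transformers pipeline; the checkpoint named below is one publicly available emotion classifier, not necessarily the model this repository trains.

```python
# Sketch: short-text emotion classification via the transformers pipeline.
# The checkpoint is an example, not necessarily the one this repo uses.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # example checkpoint
    top_k=None,  # return scores for every emotion label
)

texts = ["I can't believe this worked!", "This is so frustrating."]
for text, scores in zip(texts, classifier(texts)):
    best = max(scores, key=lambda s: s["score"])
    print(f"{text!r} -> {best['label']} ({best['score']:.2f})")
```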
Can Demographic Factors Improve Text Classification? Revisiting Demographic Adaptation in the Age of Transformers
Breast Cancer Detection using Histopathology Images
A pretrained BERT model for longer reviews
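BERT-style encoders cap input at 512 tokens, so longer reviews are often split into overlapping windows whose predictions are pooled; the sketch below assumes that chunk-and-average strategy with a stock checkpoint, which may differ from this repository's exact method.

```python
# Sketch: classify a long review by splitting it into overlapping
# 512-token windows and averaging the logits. The chunk-and-average
# strategy is an assumption, not this repo's confirmed method; the
# classification head here is freshly initialised for illustration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def classify_long_text(text, max_len=512, stride=128):
    # return_overflowing_tokens splits the input into overlapping windows
    enc = tokenizer(
        text,
        truncation=True,
        max_length=max_len,
        stride=stride,
        return_overflowing_tokens=True,
        padding="max_length",
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(
            input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
        ).logits
    return logits.mean(dim=0).softmax(-1)  # average over all chunks

print(classify_long_text("A very long review ... " * 200))
```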
We propose a novel FusionGDA model, which utilises a pre-training phase with a fusion module to enrich the gene and disease semantic representations encoded by pre-trained language models.
Natural Language Processing | BRACU
Langchain Chatbot Project uses Langchain and Streamlit to build interactive chatbots. Leveraging natural language processing, the project demonstrates two approaches: a CSV-based chatbot and one backed by a pretrained Llama model.
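A rough sketch of a Streamlit chat loop around a local Llama model via langchain-community; the model path is hypothetical, the LlamaCpp backend is an assumption, and LangChain interfaces shift between versions.

```python
# Sketch: Streamlit chat UI over a local Llama model. The model path is
# hypothetical and the LlamaCpp backend is an assumption; the repository
# may wire things up differently.
import streamlit as st
from langchain_community.llms import LlamaCpp

st.title("Llama Chatbot")

if "llm" not in st.session_state:
    st.session_state.llm = LlamaCpp(model_path="models/llama-2-7b.Q4_K_M.gguf")
if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    st.chat_message(role).write(text)

if prompt := st.chat_input("Ask something"):
    st.chat_message("user").write(prompt)
    reply = st.session_state.llm.invoke(prompt)
    st.chat_message("assistant").write(reply)
    st.session_state.history += [("user", prompt), ("assistant", reply)]
```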
Text Generator for Amazon Ads: uses Natural Language Generation (NLG) technology to auto-generate ad text by fine-tuning pre-trained GPT-Neo models, improving on an RNN/LSTM baseline.
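A minimal causal-LM fine-tuning sketch for GPT-Neo with the Hugging Face Trainer; the training file ad_copy.txt and all hyperparameters are illustrative placeholders, not the repository's configuration.

```python
# Sketch: fine-tune GPT-Neo as a causal LM with the Hugging Face Trainer.
# File names and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Hypothetical text file of ad copy, one example per line.
dataset = load_dataset("text", data_files={"train": "ad_copy.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-neo-ads", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False yields standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```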
Transformer pre-training with the MLM objective: an encoder-only model implemented and trained from scratch on a Wikipedia dataset.
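A compact sketch of MLM pretraining from scratch, assuming a small BERT-style configuration, a reused bert-base-uncased vocabulary, and a 1% slice of the Hugging Face wikipedia dataset; the repository's actual architecture and corpus split may differ.

```python
# Sketch: pre-train a small encoder-only model from scratch with the
# masked-language-modeling (MLM) objective. Config sizes and the
# Wikipedia subset are assumptions, not the repo's exact setup.
from datasets import load_dataset
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # reuse vocab
config = BertConfig(vocab_size=tokenizer.vocab_size,
                    hidden_size=256, num_hidden_layers=4,
                    num_attention_heads=4, intermediate_size=1024)
model = BertForMaskedLM(config)  # random init: training from scratch

wiki = load_dataset("wikipedia", "20220301.en", split="train[:1%]")
wiki = wiki.map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=128),
    batched=True, remove_columns=wiki.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-scratch",
                           per_device_train_batch_size=32, max_steps=10_000),
    train_dataset=wiki,
    # Randomly masks 15% of tokens; the model predicts the originals.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                                  mlm_probability=0.15),
)
trainer.train()
```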
BioMedical Language Processing with ELECTRA
Sequence-to-Sequence Spanish Pre-trained Language Models
DS-TOD: Efficient Domain Specialization for Task Oriented Dialog
Code for paper: Weight-Inherited Distillation for Task-Agnostic BERT Compression
Pretraining a GPT-2 model on the Basque language.
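Pretraining on a new language usually begins with a language-specific tokenizer; the sketch below assumes a byte-level BPE tokenizer trained on a hypothetical plain-text Basque corpus, then sizes a fresh GPT-2 to the resulting vocabulary.

```python
# Sketch: the usual first step when pretraining GPT-2 on a new language
# is training a language-specific byte-level BPE tokenizer. The corpus
# file name is hypothetical.
from tokenizers import ByteLevelBPETokenizer
from transformers import GPT2Config, GPT2LMHeadModel

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["basque_corpus.txt"], vocab_size=50_257,
                special_tokens=["<|endoftext|>"])
tokenizer.save_model("gpt2-basque-tokenizer")

# Fresh GPT-2 weights sized to the new vocabulary.
config = GPT2Config(vocab_size=tokenizer.get_vocab_size())
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters():,} parameters, randomly initialised")
```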
[ACL2023-Findings] Shuo Wen Jie Zi is a new learning paradigm that enhances the semantic understanding ability of Chinese PLMs using dictionary knowledge and the structure of Chinese characters.
This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation