list of efficient attention modules (Python, updated Aug 23, 2021)
Master's thesis with code investigating methods for incorporating long-context reasoning into low-resource languages without pre-training from scratch. We investigated whether multilingual models could inherit these properties by converting them into an Efficient Transformer (such as the Longformer architecture).
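The key idea behind the Longformer architecture referenced above is replacing full self-attention with a sliding-window pattern plus a few global-attention tokens, reducing cost from O(n²) to O(n·w). A minimal sketch of the resulting attention mask (the function name and window convention are illustrative, not from any of the listed repositories):

```python
def longformer_attention_mask(seq_len, window, global_positions=()):
    """Return a seq_len x seq_len boolean mask where mask[i][j] is True
    if position i may attend to position j.

    Each token attends to neighbours within `window // 2` positions on
    either side; tokens in `global_positions` (e.g. a [CLS] token)
    attend to, and are attended by, every position.
    """
    half = window // 2
    # Local sliding-window attention.
    mask = [[abs(i - j) <= half for j in range(seq_len)]
            for i in range(seq_len)]
    # Symmetric global attention for the designated tokens.
    for g in global_positions:
        for i in range(seq_len):
            mask[g][i] = True
            mask[i][g] = True
    return mask
```

For a sequence of length 8 with `window=2` and a global token at position 0, each row of the mask is mostly False, so the attention computation can skip most pairs, which is what makes long inputs tractable.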
Abstractive and extractive text summarization using Transformers.
[13th ToBigs Conference] YoYAK - Yes or Yes, Attention with gap-sentences for Korean long sequences
Convert pretrained RoBERTa models to various long-document transformer models
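A common step when converting a pretrained RoBERTa checkpoint for long documents is extending its learned position-embedding table past the original 512-token limit by copying the existing embeddings repeatedly. A minimal sketch, assuming the embeddings are available as a NumPy array (the function name is illustrative, not a real library API):

```python
import numpy as np

def extend_position_embeddings(pos_emb, new_max_len):
    """Extend a (old_len, dim) position-embedding table to new_max_len
    rows by tiling the original embeddings until the table is filled."""
    old_len, dim = pos_emb.shape
    reps = -(-new_max_len // old_len)  # ceiling division
    return np.tile(pos_emb, (reps, 1))[:new_max_len]
```

The tiled copies give the longer model a sensible initialization for positions it has never seen, which is then refined by continued (much cheaper than from-scratch) training.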
Industrial Text Scoring using Multimodal Deep Natural Language Processing 🚀 | Code for IEA AIE 2022 paper
Project as part of COMP34812: Natural Language Understanding
A hyperpartisan news article classification system using BERT-based techniques. The goal was to leverage state-of-the-art transformer models such as BERT, RoBERTa, and Longformer to accurately classify news articles as hyperpartisan or non-hyperpartisan.
Using transformers for text classification.
A WebApp to summarize research papers using HuggingFace Transformers.
Focus - Understanding contextual retrievability.
A summarization website that can generate summaries from either YouTube videos or PDF files.
Fine-tuned Longformer for Summarization of Machine Learning Articles
Longformer Encoder Decoder model for the legal domain, trained for long document abstractive summarization task.
An attempt at creating a model and pipeline for retrieving Italian legal documents given a user prompt.
Factuality check of the SemRep Predications
Kaggle NLP competition - Top 2% solution (36/2060)