🚇 Archive of daily ridership data from BART.
The Large Language Model for Hydrogen Storage project uses natural language processing to improve research efficiency: it summarizes hydrogen storage research papers and answers questions about them, helping users quickly grasp key insights and the latest advancements.
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Fine-tuning is a cost-efficient way of preparing a model for specialized tasks: it reduces both the required training time and the size of the training dataset. Because open-source pre-trained models are available, full training is not needed every time a new model is created.
This repository contains implementations of abstractive text summarization using RNN, RNN with reinforcement learning, and Transformer architectures.
Cybertron: the home planet of the Transformers in Go
ECE-5424 Advanced Machine Learning Final Project - LLM Prompt Recovery task
Entity detection and normalization
This project aims to simplify and summarize scientific papers, convert them to an audio format as a podcast, and create a PowerPoint presentation from each paper. This helps researchers, academics, and students alike.
Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g., DeBERTa), recurrent LM (e.g., GPT3), and encoder-decoder LM (e.g., Flan-T5).
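For reference, the quantity such tools report is defined straightforwardly: perplexity is the exponential of the negative mean token log-probability. A minimal sketch (the function name `perplexity` and the toy probabilities are illustrative, not from any of the listed repositories):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Example: a 4-token sequence where the model assigns each token
# probability 0.25 — perplexity is ≈ 4.0 (uniform over 4 choices).
logps = [math.log(0.25)] * 4
print(perplexity(logps))
```

In practice the per-token log-probabilities come from a model's output logits; the reduction to a single perplexity score is exactly this formula.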
The cross-platform app for efficiently performing Bayesian causal inference and supervised learning tasks using tree-based models, including BCF, BART, and XBART.
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
This repository explores the use of advanced sequence-to-sequence networks and transformer models, such as BERT, BART, PEGASUS, and T5, for summarizing multi-text documents in the medical domain. It leverages extensive datasets like CORD-19 and a Biomedical Abstracts dataset from Hugging Face to fine-tune these models.
The Role of Model Architecture and Scale in Predicting Molecular Properties: Insights from Fine-Tuning RoBERTa, BART, and LLaMA
A comprehensive solution for automatic spelling correction, offering functionality for dataset preparation, model fine-tuning, deployment, and testing.