A Gradio frontend for Google's Flan-T5 Large language model; it can also be adjusted for other model sizes.
A template Next.js app for running language models like FLAN-T5 with Replicate's API
Use AI to personify books, so that you can talk to them 🙊
This project was done for an assessment; I found it interesting and decided to share it. The idea is to build a scraper that scrapes a Wikipedia page and generates questions and answers.
The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction.
The Summarizer Module of the TURB
In this implementation, we performed text classification on the IMDB dataset using the Flan-T5 large language model and achieved a strong accuracy of 93%.
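The repository's exact prompting code is not shown here; below is a minimal sketch of how IMDB sentiment classification can be framed as a text-to-text task for Flan-T5. The prompt template and label-parsing helper are assumptions for illustration, not the repo's implementation:

```python
def build_prompt(review: str) -> str:
    # Hypothetical prompt template: frame IMDB sentiment classification
    # as a text-to-text instruction, as Flan-T5 expects.
    return (
        "Review: " + review + "\n"
        "Is this movie review positive or negative?"
    )

def parse_label(generated: str) -> str:
    # Map the model's free-form generated text back onto the two IMDB labels.
    text = generated.strip().lower()
    return "positive" if "positive" in text else "negative"
```

The generated string from `model.generate` would be passed through `parse_label` to recover a discrete class, which is then compared against the gold IMDB label to compute accuracy.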
Tutorial for training a Flan-T5-based model using Flax on GCP TPUs
Socratic models for multimodal reasoning & image captioning
This repository contains the code to train Flan-T5 with Alpaca instructions and low-rank adaptation (LoRA).
Research POC on the mitigation of bias in large language models (FLAN-T5 and Bloomz) through fine-tuning.
Multiple LLM-based models for NLP tasks, starting with question answering on custom data
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
Code and data for the StarSem 2023 paper "Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models"
LLMs4OL: Large Language Models for Ontology Learning
AI Assistant for Customer Support
This project fine-tunes LLMs (FLAN-T5) for a text summarisation task using a PEFT approach. Evaluation metrics are computed with ROUGE scoring, and LoRA optimisation techniques are used for fine-tuning.
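The ROUGE scoring mentioned above measures n-gram overlap between a generated summary and a reference. As a minimal sketch, here is a simplified ROUGE-1 (unigram precision/recall/F1) computed from scratch; it is an illustration of the metric, not the repository's evaluation code, which would typically use an established ROUGE library:

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    # Count unigrams in the candidate and reference summaries.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped unigram matches: each candidate token counts at most
    # as many times as it appears in the reference.
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 0.0 if precision + recall == 0 else (
        2 * precision * recall / (precision + recall)
    )
    return {"precision": precision, "recall": recall, "f1": f1}
```

For example, `rouge1("the cat sat", "the cat sat on the mat")` yields perfect precision but recall of 0.5, since the candidate covers only half of the reference's unigrams.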
Text-To-Text Textbots to Demonstrate Output Differences Between Models Trained on Filtered/Unfiltered Datasets for HSS4 - The Modern Context: Select Figures and Topics
In-context learning, Fine-Tuning, RLHF on Flan-T5
A project based on PyTorch Lightning and Transformers for training Seq2SeqLM models, with a primary focus on MT5 and FLAN-T5, but not limited to them