Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
Updated May 10, 2024 - Python
LoRA & DreamBooth training scripts & GUI using kohya-ss's trainer, for diffusion models.
33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU
Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
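Several entries above build on LoRA (low-rank adaptation), where a frozen weight matrix W is adapted via a trainable low-rank product: W' = W + (α/r)·B·A, with B initialized to zero so the adapter starts as a no-op. A minimal NumPy sketch of that idea (illustrative only; sizes and names are hypothetical, not any listed repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 64, 64, 4, 8  # hypothetical sizes; the point is r << d, k

W = rng.standard_normal((d, k))         # frozen pretrained weight (not trained)
B = np.zeros((d, r))                    # LoRA factor, zero-initialized
A = rng.standard_normal((r, k)) * 0.01  # LoRA factor, small random init

def adapted_forward(x):
    # Effective weight is W + (alpha/r) * B @ A; only A and B would be trained.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, k))
# With B = 0 the adapter contributes nothing, so the adapted model
# initially matches the frozen base model exactly.
assert np.allclose(adapted_forward(x), x @ W.T)
```

Only the r·(d + k) adapter parameters are updated during fine-tuning, which is why LoRA-style methods fit on consumer hardware.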
Finetuning AlexNet, VGGNet and ResNet with TensorFlow
A large language model for mental health: LLM fine-tuning with InternLM2, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, LLama3
simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗 and lets you quickly train your T5 models.
A stable, high-quality OpenAI API proxy for enterprises and developers. Supports the ChatGPT API (gpt-4, gpt-3.5) with no OpenAI key, no OpenAI account, and no US-dollar bank card required: just call it directly. Stable and easy to use! 智增增
Simple python WebUI for fine-tuning ChatGPT (gpt-3.5-turbo)
Gradio wrapper for sd-scripts by kohya
Chinese AI writing (poems or couplets)
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Vicuna architecture. Basically ChatGPT but with Vicuna
Speech Emotion Recognition Using Deep Convolutional Neural Network and Discriminant Temporal Pyramid Matching
Tune LLMs in a few lines of code
A full pipeline to finetune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the ChatGLM architecture. Basically ChatGPT but with ChatGLM
Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models