llava
Here are 106 public repositories matching this topic...
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Updated May 28, 2024 - Python
SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild
Updated May 23, 2024 - Python
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Updated Jun 3, 2024 - Python
A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Updated Jun 3, 2024 - Python
[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversations about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
Updated May 20, 2024 - Python
Effective prompting for Large Multimodal Models like GPT-4 Vision, LLaVA or CogVLM. 🔥
Updated Feb 13, 2024 - Python
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
Updated May 29, 2024 - Python
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
Updated May 3, 2024 - Python
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
Updated Feb 29, 2024 - Python
Open-source evaluation toolkit for large vision-language models (LVLMs); supports GPT-4V, Gemini, QwenVLPlus, 50+ HF models, and 20+ benchmarks.
Updated May 31, 2024 - Python
A Framework of Small-scale Large Multimodal Models
Updated Jun 3, 2024 - Python
Tag manager and captioner for image datasets
Updated Jun 2, 2024 - Python
RestAI is an AIaaS (AI as a Service) open-source platform built on top of LlamaIndex, Ollama, and HF Pipelines. It supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama, with precise embedding usage and tuning.
Updated May 31, 2024 - Python
Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding"
Updated Sep 5, 2023 - Python