LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
AI-First Process Automation with Large Language (LLMs), Action (LAMs), Multimodal (LMMs), and Visual Language (VLMs) Models
A collection of resources on applications of multi-modal learning in medical imaging.
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
A Framework of Small-scale Large Multimodal Models
Embed arbitrary modalities (images, audio, documents, etc) into large language models.
Open Platform for Embodied Agents
This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models"
BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”.
A curated list of awesome Multimodal studies.
This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context"
A curated list of multi-modal large language model papers and projects, with collections of popular training strategies, e.g., PEFT and LoRA.