ArXiv TLDR

Visual Instruction Tuning

2304.08485

Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee

cs.CV · cs.AI · cs.CL · cs.LG

TLDR

This paper introduces LLaVA, a large multimodal model instruction-tuned on GPT-4-generated visual instruction-following data, achieving strong zero-shot vision-language understanding and state-of-the-art accuracy on Science QA.

Key contributions

  • First use of language-only GPT-4 to generate multimodal language-image instruction-following data for training.
  • Development of LLaVA, an end-to-end trained large multimodal model that connects a vision encoder to a large language model (a minimal architecture sketch follows this list).
  • Strong zero-shot multimodal chat abilities, and a new state-of-the-art 92.53% accuracy on Science QA when LLaVA is combined with GPT-4.
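
The core design is straightforward: image features from a frozen vision encoder are mapped by a trainable projection into the language model's token-embedding space and fed to the LLM together with the text tokens. Below is a minimal PyTorch-style sketch of that connection; the dimensions, module names, and random tensors are illustrative placeholders, not the released LLaVA implementation.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Sketch of a LLaVA-style projector: vision features -> LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # LLaVA (v1) uses a single trainable linear layer as the projector;
        # the dimensions here are illustrative (e.g., CLIP ViT-L/14 -> a 4096-dim LLM).
        self.projector = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_features: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from a frozen vision encoder
        # text_embeds:    (batch, seq_len, llm_dim) from the LLM's embedding table
        visual_tokens = self.projector(image_features)         # (batch, num_patches, llm_dim)
        # Prepend the projected visual tokens to the text embeddings; the combined
        # sequence is what the LLM consumes during instruction tuning and inference.
        return torch.cat([visual_tokens, text_embeds], dim=1)

# Toy usage with random tensors standing in for encoder outputs and text embeddings.
connector = VisionLanguageConnector()
img = torch.randn(2, 256, 1024)
txt = torch.randn(2, 32, 4096)
fused = connector(img, txt)
print(fused.shape)  # torch.Size([2, 288, 4096])
```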

Why it matters

This work pioneers the use of powerful language models such as GPT-4 to generate high-quality multimodal instruction data, enabling effective instruction tuning of vision-language models. By bridging vision and language understanding in a unified framework, LLaVA advances general-purpose AI assistants capable of complex visual and textual reasoning, which is crucial for real-world applications requiring integrated multimodal comprehension.

Original Abstract

Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.
