ArXiv TLDR

OMIBench: Benchmarking Olympiad-Level Multi-Image Reasoning in Large Vision-Language Models

2604.20806

Qiguang Chen, Chengyu Luan, Jiajun Wu, Qiming Yu, Yi Yang + 5 more

cs.CV · cs.AI · cs.CL

TLDR

OMIBench is a new benchmark that evaluates large vision-language models' multi-image reasoning at the Olympiad level, revealing significant performance gaps.

Key contributions

  • Introduces OMIBench, a benchmark for multi-image Olympiad-level reasoning in LVLMs.
  • Features problems from biology, chemistry, mathematics, and physics Olympiads.
  • Includes manually annotated rationales and protocols for exact and semantic answer matching.
  • Reveals that even strong LVLMs, such as Gemini-3-Pro, reach only ~50% accuracy on multi-image tasks.
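The paper does not spell out its matching protocols here, but the distinction between exact and semantic answer matching can be sketched as follows. This is a minimal illustration, not the benchmark's actual implementation: exact matching is approximated by normalized string equality, and semantic matching by a string-similarity threshold as a stand-in for the LLM-judge or embedding comparison a real protocol would likely use.

```python
from difflib import SequenceMatcher

def exact_match(pred: str, gold: str) -> bool:
    """Exact matching: answers are equal after normalizing case and whitespace."""
    norm = lambda s: " ".join(s.strip().lower().split())
    return norm(pred) == norm(gold)

def semantic_match(pred: str, gold: str, threshold: float = 0.8) -> bool:
    """Semantic matching stand-in: accept if surface similarity clears a threshold.

    A real protocol would more plausibly use an LLM judge or embedding
    similarity; SequenceMatcher is used here only to keep the sketch
    dependency-free.
    """
    return SequenceMatcher(None, pred.lower(), gold.lower()).ratio() >= threshold

# Exact matching tolerates formatting noise but not rephrasing;
# semantic matching is needed when the same answer is worded differently.
print(exact_match("  42 J ", "42 j"))                      # True
print(exact_match("forty-two joules", "42 J"))             # False
print(semantic_match("the charge is conserved", "charge is conserved"))
```

The two protocols serve different answer types: exact matching suits short numeric or symbolic answers (common in mathematics and physics), while semantic matching handles free-form explanatory answers (common in biology and chemistry).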

Why it matters

This paper addresses a critical gap in LVLM evaluation by focusing on multi-image reasoning, a common challenge in real-world scenarios. OMIBench provides a robust tool to push the boundaries of LVLM capabilities, highlighting areas for significant improvement in complex contextual understanding.

Original Abstract

Large vision-language models (LVLMs) have made substantial advances in reasoning tasks at the Olympiad level. Nevertheless, current Olympiad-level multimodal reasoning benchmarks for these models often emphasize single-image analysis and fail to exploit contextual information across multiple images. We present OMIBench, a benchmark designed to evaluate Olympiad-level reasoning when the required evidence is distributed over multiple images. It contains problems from biology, chemistry, mathematics, and physics Olympiads, together with manually annotated rationales and evaluation protocols for both exact and semantic answer matching. Across extensive experiments on OMIBench, we observe meaningful performance gaps in existing models. Even the strongest LVLMs, such as Gemini-3-Pro, attain only about 50% on the benchmark. These results position OMIBench as a focused resource for studying and improving multi-image reasoning in LVLMs.
