ArXiv TLDR

Do Vision-Language Models Truly Perform Vision Reasoning? A Rigorous Study of the Modality Gap

arXiv:2604.16256

Yige Xu, Yongjie Wang, Zizhuo Wu, Kaisong Song, Jun Lin, et al.

cs.CV · cs.CL

TLDR

Current VLMs reason primarily in textual space, leaving a performance gap in vision-grounded reasoning; the CrossMath benchmark isolates this gap, and fine-tuning on its training set narrows it.

Key contributions

  • Introduces CrossMath, a multimodal benchmark for controlled vision-language reasoning comparisons.
  • Finds a "modality gap": VLMs excel on text-only inputs, while adding visual input (image+text) often degrades performance below the text-only baseline.
  • Shows that current VLMs reason primarily in textual space, with limited genuine reliance on visual evidence.
  • Demonstrates that fine-tuning on the CrossMath training set significantly boosts reasoning performance across individual and joint modalities.

Why it matters

This paper rigorously demonstrates that current VLMs struggle with genuine vision-grounded reasoning, often falling back on the reasoning capabilities of their textual backbones. The CrossMath benchmark gives the community a controlled tool to diagnose this failure, and the accompanying fine-tuning recipe shows the gap can be narrowed, pushing VLMs toward genuine integration of visual evidence.
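
As a rough illustration of the controlled comparison, a minimal evaluation loop might look like the sketch below. The `query_vlm` helper and the record fields (`text`, `image`, `answer`) are hypothetical stand-ins, not CrossMath's actual API; the point is that the same problem is scored under all three input formats, so per-modality accuracies are directly comparable.

```python
# Minimal sketch of a CrossMath-style modality-gap evaluation.
# `query_vlm` and the record fields are hypothetical placeholders;
# the real benchmark and model APIs may differ.
from collections import defaultdict

MODALITIES = ("text_only", "image_only", "image_text")

def evaluate_modality_gap(problems, query_vlm):
    """Score the same aligned problems under each input format."""
    correct = defaultdict(int)
    for p in problems:
        variants = {
            "text_only":  {"text": p["text"]},
            "image_only": {"image": p["image"]},
            "image_text": {"text": p["text"], "image": p["image"]},
        }
        for mod in MODALITIES:
            pred = query_vlm(**variants[mod])
            correct[mod] += int(str(pred).strip() == p["answer"])

    n = len(problems)
    report = {mod: correct[mod] / n for mod in MODALITIES}
    # The "modality gap": how far vision-grounded accuracy falls
    # below the text-only baseline.
    report["gap_image_only"] = report["text_only"] - report["image_only"]
    report["gap_image_text"] = report["text_only"] - report["image_text"]
    return report
```

Because the three variants carry identical task-relevant information, any accuracy difference is attributable to the input modality rather than to the problems themselves.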

Original Abstract

Reasoning in vision-language models (VLMs) has recently attracted significant attention due to its broad applicability across diverse downstream tasks. However, it remains unclear whether the superior performance of VLMs stems from genuine vision-grounded reasoning or relies predominantly on the reasoning capabilities of their textual backbones. To systematically measure this, we introduce CrossMath, a novel multimodal reasoning benchmark designed for controlled cross-modal comparisons. Specifically, we construct each problem in text-only, image-only, and image+text formats, guaranteeing identical task-relevant information, verified by human annotators. This rigorous alignment effectively isolates modality-specific reasoning differences while eliminating confounding factors such as information mismatch. Extensive evaluation of state-of-the-art VLMs reveals a consistent phenomenon: a substantial performance gap between textual and visual reasoning. Notably, VLMs excel with text-only inputs, whereas incorporating visual data (image+text) frequently degrades performance compared to the text-only baseline. These findings indicate that current VLMs conduct reasoning primarily in the textual space, with limited genuine reliance on visual evidence. To mitigate this limitation, we curate a CrossMath training set for VLM fine-tuning. Empirical evaluations demonstrate that fine-tuning on this training set significantly boosts reasoning performance across all individual and joint modalities, while yielding robust gains on two general visual reasoning tasks. Source code is available at https://github.com/xuyige/CrossMath.
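
The abstract does not spell out how the fine-tuning set is assembled, but given the benchmark's three aligned formats, one plausible construction is to expand each curated problem into one supervised example per format, so the model sees identical reasoning content through every modality. A hedged sketch, with hypothetical field names:

```python
def build_finetuning_examples(problems):
    """Expand each aligned problem into three supervised examples,
    one per input format. Field names are hypothetical; this is a
    plausible construction, not the paper's confirmed procedure."""
    examples = []
    for p in problems:
        for text, image in (
            (p["text"], None),        # text-only
            (None, p["image"]),       # image-only
            (p["text"], p["image"]),  # image+text
        ):
            examples.append({"text": text, "image": image, "target": p["answer"]})
    return examples
```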
