Do Vision Language Models Need to Process Image Tokens?
Sambit Ghosh, R. Venkatesh Babu, Chirag Agarwal
TLDR
This paper questions whether deep image-token processing is always necessary in VLMs, finding that visual representations stabilize early in the network while textual representations continue to evolve.
Key contributions
- Visual representations in VLMs rapidly converge to a stable, bounded-complexity state early in the network (see the metric sketch after this list).
- Textual representations, unlike visual ones, continue to undergo substantial restructuring across deeper layers.
- The necessity of deep visual processing is task-dependent: single-token predictions remain relatively robust to truncation, while multi-token generation requires sustained access to visual representations.
- Truncating visual depth perturbs intermediate reasoning trajectories more than final outputs, suggesting image tokens shape how the model reasons more than what it concludes.
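
The stability claims above rest on layer-wise measurements of visual-token statistics. As a rough illustration only (not the paper's code), the sketch below computes two common stand-in metrics, spectral entropy and participation ratio, over per-layer token representations; the random tensor is a placeholder for image-token hidden states that would in practice be extracted from a VLM (e.g. via `output_hidden_states=True` in transformers), and the exact estimators used by the authors may differ.

```python
# Illustrative layer-wise stability metrics for a stack of token representations.
# The random tensor below is a stand-in for per-layer hidden states of the image
# tokens; spectral entropy and participation ratio are simple proxies for the
# entropy and intrinsic-dimensionality trends described in the paper.
import numpy as np

def spectral_entropy(feats: np.ndarray) -> float:
    """Entropy of the normalized eigenvalue spectrum of the token covariance."""
    cov = np.cov(feats, rowvar=False)                 # (dim, dim) covariance across tokens
    eig = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eig / eig.sum()
    return float(-(p * np.log(p)).sum())

def participation_ratio(feats: np.ndarray) -> float:
    """Participation ratio of the covariance spectrum: an intrinsic-dimensionality proxy."""
    eig = np.clip(np.linalg.eigvalsh(np.cov(feats, rowvar=False)), 0.0, None)
    return float(eig.sum() ** 2 / (np.square(eig).sum() + 1e-12))

# Placeholder hidden states: (num_layers, num_image_tokens, hidden_dim)
rng = np.random.default_rng(0)
hidden_states = rng.standard_normal((32, 576, 256))

for layer, feats in enumerate(hidden_states):
    print(f"layer {layer:02d}  entropy = {spectral_entropy(feats):.2f}  "
          f"participation ratio = {participation_ratio(feats):.1f}")
```

Plotted across layers for a real model, flat curves in these quantities would correspond to the "bounded-complexity regime" the paper reports for visual tokens.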
Why it matters
This work challenges the fundamental assumption that deep visual processing is uniformly essential in Vision Language Models. It points toward more efficient VLM architectures that adapt visual-token processing to task requirements, which could yield significant computational savings and new design paradigms for multimodal LLMs.
Original Abstract
Vision Language Models (VLMs) have achieved remarkable success by integrating visual encoders with large language models (LLMs). While VLMs process dense image tokens across deep transformer stacks (incurring substantial computational overhead), it remains fundamentally unclear whether sustained image-token processing is necessary for their performance, or whether visual representations meaningfully evolve from early to later layers. In this work, we systematically investigate the functional role of image tokens in VLMs and show that visual representations rapidly converge to a bounded-complexity regime, i.e., their entropy stabilizes, intrinsic dimensionality compresses, and trajectory curvature approaches a near-constant profile. In contrast, textual representations continue to undergo substantial restructuring across depth. Once stabilized, visual representations become largely interchangeable between layers, indicating limited additional transformation in deeper stages. Further, depth-wise visual truncation reveals that the necessity of visual processing is task-dependent, where single-token predictions remain comparatively robust to truncated visual depth, but multi-token generation requires sustained access to visual representations. Under deterministic decoding, reducing visual depth perturbs intermediate reasoning trajectories more strongly than final outputs, suggesting that image tokens influence the structure of reasoning more than the ultimate conclusions. Collectively, these findings question the assumption that deeper visual processing is uniformly essential in VLMs, challenging the current paradigm of multimodal LLM architectures.
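
To make the depth-wise visual truncation idea concrete, here is a toy sketch under stated assumptions: a stand-in stack of standard Transformer layers plays the role of a VLM decoder, and the `image_slice` argument marking the image-token positions is hypothetical. It is not the authors' implementation; applying the same intervention to a real VLM would require forward hooks and version-specific module paths. After a chosen cutoff layer, image-token positions are simply copied forward unchanged, so deeper layers can still attend to them but no longer update them.

```python
# Toy illustration of depth-wise visual truncation: freeze image-token states
# after a cutoff layer while letting text tokens keep evolving.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Stand-in for a VLM decoder stack; real models differ in structure and masking."""
    def __init__(self, depth: int = 8, dim: int = 64, heads: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, heads, batch_first=True) for _ in range(depth)]
        )

    def forward(self, x: torch.Tensor, image_slice: slice, cutoff: int) -> torch.Tensor:
        frozen = None
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i == cutoff:
                # Cache the image-token states at the cutoff layer.
                frozen = x[:, image_slice].clone()
            elif frozen is not None:
                # Deeper layers still attend to image tokens, but the tokens
                # themselves are no longer updated ("truncated" visual depth).
                x = x.clone()
                x[:, image_slice] = frozen
        return x

tokens = torch.randn(1, 600, 64)   # e.g. 576 image tokens followed by 24 text tokens
out = ToyDecoder()(tokens, image_slice=slice(0, 576), cutoff=3)
print(out.shape)                   # torch.Size([1, 600, 64])
```

Sweeping the cutoff layer and comparing single-token answers against multi-token generations is the kind of experiment that would expose the task-dependence the abstract describes.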