ArXiv TLDR

Instruction-Evidence Contrastive Dual-Stream Decoding for Grounded Vision-Language Reasoning

arXiv:2604.25809

Yashwant Pravinrao Bangde, Debaditya Roy

cs.CV

TLDR

IECD2 is a new decoding framework for VLMs that balances linguistic expressiveness with visual grounding to significantly reduce hallucination.

Key contributions

  • Introduces IECD2, a dual-stream decoding framework that balances linguistic expressiveness with visual grounding in VLMs.
  • Maintains instruction-driven (expressive) and evidence-driven (grounded) token probability streams.
  • Uses a symmetric KL-based contrastive gate to adaptively fuse the two streams, suppressing ungrounded language priors (see the sketch after this list).
  • Demonstrates improved accuracy, reasoning, and substantial hallucination reduction on multiple benchmarks.
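
Neither the summary nor the abstract pins down the exact gating function, so the following is a minimal sketch of how such a dual-stream fusion could look in PyTorch. The names (`symmetric_kl`, `fuse_streams`), the exponential gating schedule, and the temperature `tau` are illustrative assumptions, not the authors' formulation.

```python
import torch

def symmetric_kl(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Symmetric KL divergence between two vocabulary distributions."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return 0.5 * ((p * (p / q).log()).sum() + (q * (q / p).log()).sum())

def fuse_streams(p_instr: torch.Tensor, p_evid: torch.Tensor,
                 tau: float = 1.0) -> torch.Tensor:
    """Fuse the instruction-driven and evidence-driven token distributions.

    When the streams agree (low symmetric KL), the gate stays near 0 and the
    expressive instruction-driven stream dominates; as they diverge, weight
    shifts toward the grounded evidence-driven stream.
    """
    d = symmetric_kl(p_instr, p_evid)    # disagreement between the streams
    gate = 1.0 - torch.exp(-d / tau)     # assumed gate: 0 when streams agree, -> 1 as they diverge
    fused = (1.0 - gate) * p_instr + gate * p_evid
    return fused / fused.sum()           # renormalize over the vocabulary

# Toy example over a 5-token vocabulary: the streams disagree on token 0 vs 2,
# so the gate pulls probability mass toward the evidence-driven stream.
p_instr = torch.tensor([0.50, 0.20, 0.10, 0.10, 0.10])
p_evid = torch.tensor([0.10, 0.20, 0.50, 0.10, 0.10])
print(fuse_streams(p_instr, p_evid))
```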

Why it matters

VLMs often produce fluent outputs that are only weakly grounded in visual evidence, i.e., they hallucinate. This paper introduces IECD2, a decoding method that addresses the problem at generation time by balancing linguistic fluency against visual evidence. The result is more reliable, trustworthy VLM output, making these models more practical for real-world applications.

Original Abstract

Vision-Language Models (VLMs) exhibit strong performance in instruction following and open-ended vision-language reasoning, yet they frequently generate fluent outputs that are weakly grounded in visual evidence. Prior work has shown that instruction prompting further worsens this issue by amplifying language priors, especially when the visual signal is uncertain or ambiguous. To address this challenge, we propose a decoding framework that explicitly balances linguistic informativeness and visual faithfulness during generation. Our method, Instruction-Evidence Contrastive Dual-Stream Decoding (IECD2), maintains two parallel token probability distributions at each decoding step: an instruction-driven stream that promotes expressive and informative responses, and an evidence-driven stream that enforces strict grounding in the image. These two streams are adaptively fused using a symmetric KL-based contrastive gate, which suppresses tokens favored by language priors but unsupported by visual evidence, while preserving them when both distributions agree. We evaluate IECD2 on multiple datasets spanning various generative vision-language reasoning tasks such as captioning and visual question answering, including POPE, MME, VQAv2, AMBER, MS-COCO, and LLaVA-Bench. IECD2 demonstrates consistent improvements in task accuracy and reasoning performance, alongside a substantial reduction in hallucination across all evaluation metrics compared to state-of-the-art decoding approaches.
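
In equation form, one plausible reading of the fusion the abstract describes (the notation is mine, not taken from the paper):

```latex
% p_I, p_E: instruction- and evidence-driven token distributions at step t
% lambda: the contrastive gate, driven by the streams' symmetric KL divergence
D_t = \tfrac{1}{2}\left[\mathrm{KL}\!\left(p_I \,\|\, p_E\right) + \mathrm{KL}\!\left(p_E \,\|\, p_I\right)\right],
\qquad
p_t(y) \propto \bigl(1 - \lambda(D_t)\bigr)\, p_I(y) + \lambda(D_t)\, p_E(y)
```

Here \lambda(D_t) grows with the disagreement D_t, so tokens favored only by language priors lose probability mass, while tokens supported by both streams pass through essentially unchanged, matching the behavior the abstract describes.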
