ArXiv TLDR

Prefill-Time Intervention for Mitigating Hallucination in Large Vision-Language Models

arXiv:2604.25642

Chengsheng Zhang, Chenghao Sun, Xinyan Jiang, Wei Li, Xinmei Tian

cs.CV · cs.AI

TLDR

Prefill-Time Intervention (PTI) reduces hallucinations in LVLMs by correcting errors in the KV cache once, during the prefill stage, before they can accumulate during autoregressive decoding.

Key contributions

  • Proposes Prefill-Time Intervention (PTI) to address LVLM hallucinations by intervening once during the prefill stage.
  • Enhances the initial Key-Value (KV) cache before error accumulation, unlike prior decoding-stage methods.
  • PTI is modality-aware: it steers keys toward visually grounded objects and steers values to filter background noise, correcting hallucination-prone representations at their source.
  • Significantly mitigates hallucinations, generalizes across decoding strategies, LVLMs, and benchmarks, and is plug-and-play compatible with existing decoding-stage methods.
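To make the mechanism concrete, here is a minimal sketch of a one-shot prefill-time KV-cache edit. Everything here is illustrative: the function name, the steering directions `key_dir`/`value_dir`, the scaling factors, and the NumPy representation of the cache are assumptions for exposition, not the paper's actual implementation (which derives modality-aware directions inside the model).

```python
import numpy as np

def prefill_time_intervention(keys, values, key_dir, value_dir,
                              visual_mask, alpha=0.1, beta=0.1):
    """Illustrative one-shot edit of a prefill KV cache.

    keys, values : (seq_len, d) arrays produced by the prefill pass.
    key_dir, value_dir : unit-norm steering directions (hypothetical
        stand-ins for the paper's modality-aware directions).
    visual_mask : boolean (seq_len,) array marking visual tokens.
    alpha, beta : steering strengths (hypothetical hyperparameters).
    """
    keys = keys.copy()
    values = values.copy()
    # Steer the keys of visual tokens toward the "grounded object" direction.
    keys[visual_mask] += alpha * key_dir
    # Steer values by attenuating their component along a
    # "background noise" direction (projection removal).
    proj = values @ value_dir                      # (seq_len,)
    values -= beta * np.outer(proj, value_dir)     # subtract noise component
    return keys, values
```

Because the edit happens once, before any token is generated, every subsequent decoding step attends to the corrected cache, which is the paper's argument for why this avoids the error accumulation seen in decoding-stage steering.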

Why it matters

LVLM hallucinations are a critical reliability issue. This paper introduces a prefill-time intervention that tackles error accumulation at its source: by correcting hallucination-prone representations before decoding begins, it significantly improves factual consistency and generalizes across models and benchmarks.

Original Abstract

Large Vision-Language Models (LVLMs) have achieved remarkable progress in visual-textual understanding, yet their reliability is critically undermined by hallucinations, i.e., the generation of factually incorrect or inconsistent responses. While recent studies using steering vectors demonstrated promise in reducing hallucinations, a notable challenge remains: they inadvertently amplify the severity of residual hallucinations. We attribute this to their exclusive focus on the decoding stage, where errors accumulate autoregressively and progressively worsen subsequent hallucinatory outputs. To address this, we propose Prefill-Time Intervention (PTI), a novel steering paradigm that intervenes only once during the prefill stage, enhancing the initial Key-Value (KV) cache before error accumulation occurs. Specifically, PTI is modality-aware, deriving distinct directions for visual and textual representations. This intervention is decoupled to steer keys toward visually-grounded objects and values to filter background noise, correcting hallucination-prone representations at their source. Extensive experiments demonstrate PTI's significant performance in mitigating hallucinations and its generalizability across diverse decoding strategies, LVLMs, and benchmarks. Moreover, PTI is orthogonal to existing decoding-stage methods, enabling plug-and-play integration and further boosting performance. Code is available at: https://github.com/huaiyi66/PTI.
