ArXiv TLDR

Entropy-Gradient Grounding: Training-Free Evidence Retrieval in Vision-Language Models

arXiv: 2604.08456

Marcel Gröpl, Jaewoo Jung, Seungryong Kim, Marc Pollefeys, Sunghwan Hong

cs.CV cs.CL

TLDR

This paper introduces Entropy-Gradient Grounding, a training-free method that lets Vision-Language Models retrieve visual evidence at test time by backpropagating their own next-token uncertainty.

Key contributions

  • Proposes Entropy-Gradient Grounding, a training-free, model-intrinsic method for test-time evidence retrieval in VLMs.
  • Backpropagates the entropy of the next-token distribution to the visual token embeddings to obtain relevance maps, avoiding auxiliary detectors and attention-map heuristics (see the sketch after this list).
  • Supports multi-evidence queries by extracting and ranking coherent regions, and adds an iterative zoom-and-reground procedure with a spatial-entropy stopping rule.
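
For intuition, here is a minimal PyTorch sketch of the entropy-gradient relevance map, assuming a HuggingFace-style decoder that accepts `inputs_embeds` and returns `logits`. The names `vlm`, `visual_embeds`, and `text_embeds` are hypothetical stand-ins, not the paper's actual interface.

```python
import torch
import torch.nn.functional as F

def entropy_gradient_relevance(vlm, visual_embeds, text_embeds):
    """Backpropagate next-token entropy onto the visual token embeddings.

    visual_embeds: (num_visual_tokens, dim) -- projected image patch tokens
    text_embeds:   (num_text_tokens, dim)   -- embedded query tokens
    Returns one relevance score per visual token: (num_visual_tokens,).
    """
    # Make the visual tokens a leaf we can take gradients with respect to.
    visual_embeds = visual_embeds.detach().requires_grad_(True)

    # Run the language model over [visual tokens; query tokens] and take
    # the logits of the next token after the query.
    inputs = torch.cat([visual_embeds, text_embeds], dim=0).unsqueeze(0)
    logits = vlm(inputs_embeds=inputs).logits[0, -1]  # (vocab_size,)

    # Shannon entropy of the next-token distribution: the uncertainty signal.
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

    # Gradient of the entropy w.r.t. each visual token embedding; its norm
    # serves as that token's relevance score.
    (grad,) = torch.autograd.grad(entropy, visual_embeds)
    return grad.norm(dim=-1)  # (num_visual_tokens,)
```

Reshaping the per-token scores back to the image's patch grid yields a spatial relevance map, from which coherent regions can be extracted and ranked.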

Why it matters

This paper addresses a key limitation of VLMs in handling fine-grained details and complex queries by providing a novel, training-free grounding mechanism. It offers a more interpretable way for VLMs to "look" at relevant visual evidence, improving performance on challenging tasks without requiring additional training data or model modifications.

Original Abstract

Despite rapid progress, pretrained vision-language models still struggle when answers depend on tiny visual details or on combining clues spread across multiple regions, as in documents and compositional queries. We address this by framing grounding as test-time evidence retrieval: given a query, the model should actively identify where to look next to resolve ambiguity. To this end, we propose a training-free, model-intrinsic grounding method that uses uncertainty as supervision. Specifically, we compute the entropy of the model's next-token distribution and backpropagate it to the visual token embeddings to obtain an entropy-gradient relevance map, without auxiliary detectors or attention-map heuristics. We then extract and rank multiple coherent regions to support multi-evidence queries, and introduce an iterative zoom-and-reground procedure with a spatial-entropy stopping rule to avoid over-refinement. Experiments on seven benchmarks across four VLM architectures demonstrate consistent improvements over existing methods, with the largest gains on detail-critical and high-resolution settings, while also producing more interpretable evidence localizations.
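
The abstract only names the spatial-entropy stopping rule, so the loop below is an interpretation rather than the paper's exact procedure: the relevance map is normalized into a distribution over patches, and zooming stops once that distribution's entropy no longer decreases, i.e., the map has stopped sharpening. `relevance_fn` and `crop_fn` are hypothetical helpers.

```python
import torch

def zoom_and_reground(image, query, relevance_fn, crop_fn, max_steps=3):
    """relevance_fn(image, query) -> (H, W) nonnegative relevance map.
    crop_fn(image, rel_map)       -> zoomed crop around the top region."""
    prev_spatial_entropy = float("inf")
    for _ in range(max_steps):
        rel = relevance_fn(image, query)            # (H, W) relevance map
        p = (rel / rel.sum()).flatten()             # distribution over patches
        spatial_entropy = -(p * p.clamp_min(1e-12).log()).sum().item()

        # Stop when the map no longer sharpens: further zooming would
        # over-refine rather than surface new evidence.
        if spatial_entropy >= prev_spatial_entropy:
            break
        prev_spatial_entropy = spatial_entropy
        image = crop_fn(image, rel)                 # zoom into the evidence region
    return image
```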
