VISOR: Agentic Visual Retrieval-Augmented Generation via Iterative Search and Over-horizon Reasoning
Yucheng Shen, Jiulong Wu, Jizhou Huang, Dawei Yin, Lingyong Yan, et al.
TLDR
VISOR is a new agentic VRAG framework that tackles visual evidence sparsity and search drift for complex, long-horizon visual reasoning tasks.
Key contributions
- Introduces VISOR, a single-agent framework for visual retrieval-augmented generation.
- Uses a Structured Evidence Space for cross-page reasoning and Visual Action Evaluation/Correction.
- Implements Dynamic Trajectory with Sliding Window and Intent Injection to prevent search drift.
- Achieves state-of-the-art performance and efficiency on long-horizon visual reasoning benchmarks.
Why it matters
Existing VRAG systems struggle with complex queries due to scattered visual evidence and context overload. VISOR addresses these by introducing novel mechanisms for structured evidence processing and search drift mitigation. This leads to more efficient and accurate reasoning over visually rich documents.
Original Abstract
Visual Retrieval-Augmented Generation (VRAG) empowers Vision-Language Models to retrieve and reason over visually rich documents. To tackle complex queries requiring multi-step reasoning, agentic VRAG systems interleave reasoning with iterative retrieval. However, existing agentic VRAG systems face two critical bottlenecks. (1) Visual Evidence Sparsity: key evidence is scattered across pages yet processed in isolation, hindering cross-page reasoning; moreover, fine-grained intra-image evidence often requires precise visual actions, whose misuse degrades retrieval quality. (2) Search Drift in Long Horizons: the accumulation of visual tokens across retrieved pages dilutes context and causes cognitive overload, leading agents to deviate from their search objective. To address these challenges, we propose VISOR (Visual Retrieval-Augmented Generation via Iterative Search and Over-horizon Reasoning), a unified single-agent framework. VISOR features a structured Evidence Space for progressive cross-page reasoning, coupled with a Visual Action Evaluation and Correction mechanism to manage visual actions. Additionally, we introduce a Dynamic Trajectory with Sliding Window and Intent Injection to mitigate search drift. These mechanisms anchor the evidence space while discarding earlier raw interactions, preventing context from being overwhelmed by visual tokens. We train VISOR using a Group Relative Policy Optimization-based Reinforcement Learning (GRPO-based RL) pipeline with state masking and credit assignment tailored for dynamic context reconstruction. Extensive experiments on ViDoSeek, SlideVQA, and MMLongBench demonstrate that VISOR achieves state-of-the-art performance with superior efficiency for long-horizon visual reasoning tasks.
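The paper does not publish implementation details, but the core idea of the Dynamic Trajectory — keep a persistent evidence space and the original search intent while letting earlier raw interactions fall out of a sliding window — can be illustrated with a minimal sketch. All class and method names below (`TrajectoryContext`, `record_step`, `build_context`) are hypothetical, not from the paper:

```python
from collections import deque


class TrajectoryContext:
    """Hypothetical sketch of a sliding-window trajectory with intent injection.

    Distilled cross-page evidence is anchored for the whole search, while only
    the most recent raw interactions (retrieved pages, visual actions) stay in
    the prompt window. The original search intent is re-injected on every
    context reconstruction so the agent does not drift from its objective.
    """

    def __init__(self, intent, window_size=3):
        self.intent = intent                     # original search objective, always re-injected
        self.evidence = []                       # distilled evidence space (never discarded)
        self.window = deque(maxlen=window_size)  # recent raw interactions; older ones drop out

    def record_step(self, raw_interaction, distilled_evidence=None):
        """Log one retrieval/reasoning step; oldest raw steps fall out of the window."""
        self.window.append(raw_interaction)
        if distilled_evidence:
            self.evidence.append(distilled_evidence)

    def build_context(self):
        """Reconstruct the prompt: intent + evidence space + recent raw window."""
        parts = [f"[Intent] {self.intent}"]
        parts += [f"[Evidence {i}] {e}" for i, e in enumerate(self.evidence, 1)]
        parts += [f"[Recent] {r}" for r in self.window]
        return "\n".join(parts)


# Toy usage: early raw interactions vanish, but their distilled evidence survives.
ctx = TrajectoryContext("Find the 2021 revenue figure in the annual report", window_size=2)
ctx.record_step("retrieved page 3 (cover, no financials)")
ctx.record_step("retrieved page 12 (income statement)",
                "Page 12 contains the revenue table with a 2021 column")
ctx.record_step("zoomed into the revenue row on page 12",
                "Page 12: 2021 revenue row located")
prompt = ctx.build_context()
```

After three steps with a window of two, the page-3 interaction has been evicted from the raw window, yet the evidence space and the injected intent remain in every reconstructed context — which is the mechanism the abstract credits with preventing visual-token overload.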