When to Retrieve During Reasoning: Adaptive Retrieval for Large Reasoning Models
Dongxin Guo, Jikun Wu, Siu Ming Yiu
TLDR
ReaLM-Retrieve detects knowledge gaps at the reasoning-step level and adaptively injects retrieved evidence mid-reasoning, improving both the accuracy and efficiency of large reasoning models.
Key contributions
- Detects knowledge gaps at reasoning-step granularity, enabling precisely timed retrieval (see the sketch after this list).
- Learns an adaptive intervention policy that decides when external evidence most benefits ongoing reasoning.
- Integrates retrieval efficiently, cutting per-retrieval overhead by 3.2x compared to naive integration.
- Improves answer F1 by 10.1% absolute over standard RAG and reduces retrieval calls by 47% versus fixed-interval approaches on multi-hop QA benchmarks.
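The step-level detector is what makes the retrieval timing precise. The digest does not specify the detector's internals, so the sketch below is an illustration only: it uses mean next-token entropy per reasoning step as a stand-in uncertainty signal, and the function names and threshold value are assumptions, not the paper's API.

```python
# Illustrative step-level knowledge-gap detector. Mean next-token entropy
# per reasoning step stands in for the paper's (unspecified) uncertainty
# signal; the threshold value is an assumption.
import math

def step_entropy(token_dists):
    """Mean Shannon entropy (nats) over a step's next-token distributions."""
    per_token = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in token_dists]
    return sum(per_token) / len(per_token)

def is_knowledge_gap(token_dists, threshold=1.0):
    """Flag a reasoning step when its average token uncertainty is high."""
    return step_entropy(token_dists) > threshold

# Toy example: a confident step vs. a hesitant one.
confident = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]]   # entropy ~0.52 nats
hesitant  = [[0.4, 0.3, 0.3],   [0.35, 0.35, 0.3]] # entropy ~1.09 nats
print(is_knowledge_gap(confident))  # False
print(is_knowledge_gap(hesitant))   # True
```

Operating at step granularity, rather than per token or sentence, is the paper's first contribution; the same signal feeds the intervention policy sketched after the abstract below.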
Why it matters
This paper addresses a critical misalignment between RAG and large reasoning models, enabling more effective and efficient evidence injection during complex multi-step reasoning. It establishes new state-of-the-art efficiency-accuracy trade-offs for reasoning-intensive retrieval tasks.
Original Abstract
Large reasoning models such as DeepSeek-R1 and OpenAI o1 generate extended chains of thought spanning thousands of tokens, yet their integration with retrieval-augmented generation (RAG) remains fundamentally misaligned. Current RAG systems optimize for providing context before reasoning begins, while reasoning models require evidence injection during multi-step inference chains. We introduce ReaLM-Retrieve, a reasoning-aware retrieval framework that addresses this mismatch through three key innovations: (1) a step-level uncertainty detector that identifies knowledge gaps at reasoning-step granularity rather than token or sentence level; (2) a retrieval intervention policy that learns when external evidence maximally benefits ongoing reasoning; and (3) an efficiency-optimized integration mechanism that reduces per-retrieval overhead by 3.2x compared to naive integration. Experiments on MuSiQue, HotpotQA, and 2WikiMultiHopQA demonstrate that ReaLM-Retrieve achieves on average 10.1% absolute improvement in answer F1 over standard RAG (range: 9.0-11.8% across the three benchmarks) while reducing retrieval calls by 47% compared to fixed-interval approaches like IRCoT (all improvements significant at p<0.01, paired bootstrap). On the challenging MuSiQue benchmark requiring 2-4 hop reasoning, our method achieves 71.2% F1 with an average of only 1.8 retrieval calls per question. Analysis shows that ReaLM-Retrieve also improves retrieval quality itself, achieving 81.3% Recall@5 with consistently higher precision and MRR than fixed-interval baselines on supporting evidence, establishing new state-of-the-art efficiency-accuracy trade-offs for reasoning-intensive retrieval tasks.
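To make the three pieces concrete, here is a minimal, self-contained control-flow sketch under stated assumptions: the generator, gap detector, and intervention policy are learned components in the paper, replaced here by toy callables so the loop runs end to end, and every name is illustrative rather than the paper's API.

```python
# A minimal sketch of the detect -> decide -> retrieve loop the abstract
# describes. Learned components (generator, detector, policy) are replaced
# by toy callables so the control flow actually runs.
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    uncertainty: float       # stand-in for the step-level detector's score
    is_final: bool = False
    answer: str = ""

def reason_with_adaptive_retrieval(generate_step, policy, retrieve,
                                   gap_threshold=0.5, max_steps=8, budget=3):
    """Retrieve only when a step is flagged as a knowledge gap AND the
    policy predicts external evidence will help, capping total calls."""
    context, steps, calls = [], [], 0
    for _ in range(max_steps):
        step = generate_step(steps, context)
        if step.is_final:
            return step.answer, calls
        if calls < budget and step.uncertainty > gap_threshold and policy(step):
            context.extend(retrieve(step.text))  # inject evidence mid-chain
            calls += 1
        steps.append(step)
    return "", calls

# Toy demo: only the uncertain middle step triggers a retrieval call.
script = [Step("The Louvre is in Paris", 0.1),
          Step("need the Louvre's opening year", 0.9),
          Step("", 0.0, is_final=True, answer="1793")]
gen = lambda steps, ctx: script[min(len(steps), len(script) - 1)]
answer, calls = reason_with_adaptive_retrieval(
    gen, policy=lambda s: True, retrieve=lambda q: [f"doc about: {q}"])
print(answer, calls)  # -> 1793 1
```

A fixed-interval baseline like IRCoT would retrieve after every step; gating on both the detector and the learned policy is what the abstract credits for the 47% reduction in retrieval calls.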