NeocorRAG: Less Irrelevant Information, More Explicit Evidence, and More Effective Recall via Evidence Chains
Shiyao Peng, Qianhe Zheng, Zhuodi Hao, Zichen Tang, Rongjin Li + 5 more
TLDR
NeocorRAG enhances RAG by optimizing retrieval quality through evidence chains, achieving SOTA performance with significantly fewer tokens.
Key contributions
- Proposes Recall Conversion Rate (RCR) to quantify retrieval's contribution to reasoning accuracy.
- Introduces NeocorRAG, optimizing RAG retrieval quality via systematic mining of Evidence Chains.
- Uses activated search and constrained decoding for precise evidence chain generation.
- Achieves SOTA on benchmarks (HotpotQA, NQ) with 3B/70B models, using <20% of tokens.
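The summary does not give the exact formula for the Recall Conversion Rate, but a plausible reading, consistent with the abstract's observation that RCR decays as Recall@5 rises, is the ratio of downstream answer accuracy to retrieval recall: how much of what is recalled actually converts into correct reasoning. A minimal sketch under that assumption (the function name and definition here are illustrative, not the paper's):

```python
def recall_conversion_rate(recalled, correct):
    """Hypothetical RCR sketch: downstream accuracy divided by Recall@k.

    recalled[i] -- True if the gold evidence for question i was in the top-k results
    correct[i]  -- True if the system answered question i correctly
    """
    n = len(recalled)
    if n == 0:
        return 0.0
    recall_at_k = sum(recalled) / n   # e.g., Recall@5 over the eval set
    accuracy = sum(correct) / n       # end-to-end answer accuracy
    # If nothing is recalled, no retrieval contribution can be measured.
    return accuracy / recall_at_k if recall_at_k else 0.0


# Example: 3 of 4 questions recall gold evidence, 2 of 4 are answered correctly.
rcr = recall_conversion_rate(
    [True, True, True, False],
    [True, False, True, False],
)
print(round(rcr, 4))  # 0.5 / 0.75 ≈ 0.6667
```

Under this reading, the abstract's finding makes intuitive sense: methods can keep pushing Recall@5 upward while accuracy plateaus, so the ratio falls — pointing to retrieval *quality*, not raw recall, as the bottleneck.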
Why it matters
RAG methods often face a trade-off: pushing recall higher tends to degrade retrieval quality, which in turn hurts downstream reasoning. NeocorRAG resolves this by optimizing retrieval quality through evidence chains, reaching SOTA performance at a fraction of the token cost, and it does so training-free, which makes it practical for enhancing existing RAG pipelines.
Original Abstract
Although precise recall is a core objective in Retrieval-Augmented Generation (RAG), a critical oversight persists in the field: improvements in retrieval performance do not consistently translate to commensurate gains in downstream reasoning. To diagnose this gap, we propose the Recall Conversion Rate (RCR), a novel evaluation metric to quantify the contribution of retrieval to reasoning accuracy. Our quantitative analysis of mainstream RAG methods reveals that as Recall@5 improves, the RCR exhibits a near-linear decay. We identify the neglect of retrieval quality in these methods as the underlying cause. In contrast, approaches that focus solely on quality optimization often suffer from inferior recall performance. Both categories lack a comprehensive understanding of retrieval quality optimization, resulting in a trade-off dilemma. To address these challenges, we propose comprehensive retrieval quality optimization criteria and introduce the NeocorRAG framework. This framework achieves holistic retrieval quality optimization by systematically mining and utilizing Evidence Chains. Specifically, NeocorRAG first employs an innovative activated search algorithm to obtain a refined candidate space. Then it ensures precise evidence chain generation through constrained decoding. Finally, the retrieved set of evidence chains guides the retrieval optimization process. Evaluated on benchmarks including HotpotQA, 2WikiMultiHopQA, MuSiQue, and NQ, NeocorRAG achieves SOTA performance on both 3B and 70B parameter models, while consuming less than 20% of tokens used by comparable methods. This study presents an efficient, training-free paradigm for RAG enhancement that effectively optimizes retrieval quality while maintaining high recall. Our code is released at https://github.com/BUPT-Reasoning-Lab/NeocorRAG.