RARE: Redundancy-Aware Retrieval Evaluation Framework for High-Similarity Corpora
TLDR
RARE is a new framework that creates realistic RAG benchmarks by accounting for document redundancy, revealing robustness gaps in current retrieval systems.
Key contributions
- Introduces RARE, a framework for evaluating RAG systems in high-similarity, redundant corpora.
- Decomposes documents into atomic facts to precisely track and account for information redundancy.
- Enhances LLM-based data generation with CRRF, which scores quality criteria separately and fuses decisions by rank, making benchmark creation more reliable.
- Creates RedQA, a benchmark revealing significant performance drops for retrievers on real-world data.
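The atomic-fact idea behind redundancy-aware evaluation can be sketched roughly as follows: a retrieval counts as successful if the retrieved documents *jointly* cover every fact a question requires, no matter which of several redundant documents supplies each fact. This is a minimal illustration under assumed data structures; the function and field names (`perf_recall_at_k`, `ranked_docs`, `required_facts`) are hypothetical, not RARE's actual API or metric definition.

```python
def covers_required_facts(retrieved_docs, doc_facts, required_facts):
    """True if the union of atomic facts across retrieved_docs
    contains every required fact (redundant docs are interchangeable)."""
    covered = set()
    for doc_id in retrieved_docs:
        covered |= doc_facts.get(doc_id, set())
    return required_facts <= covered

def perf_recall_at_k(queries, doc_facts, k=10):
    """Fraction of queries whose top-k retrieval covers all required
    facts -- a redundancy-aware analogue of recall@k."""
    hits = sum(
        covers_required_facts(q["ranked_docs"][:k], doc_facts, q["required_facts"])
        for q in queries
    )
    return hits / len(queries)
```

Under a standard document-level metric, retrieving `d2` when the gold label is `d1` counts as a miss even if both state the same fact; the fact-level check above does not penalize that.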
Why it matters
Existing RAG benchmarks overlook document redundancy, so retrievers that score well on them can still fail on real-world corpora where many documents carry overlapping information. RARE gives practitioners a way to build realistic, domain-specific benchmarks that surface these robustness gaps before deployment.
Original Abstract
Existing QA benchmarks typically assume distinct documents with minimal overlap, yet real-world retrieval-augmented generation (RAG) systems operate on corpora such as financial reports, legal codes, and patents, where information is highly redundant and documents exhibit strong inter-document similarity. This mismatch undermines evaluation validity: retrievers can be unfairly undervalued even when they retrieve documents that provide sufficient evidence, because redundancy across documents is not accounted for in evaluation. On the other hand, retrievers that perform well on standard benchmarks often generalize poorly to real-world corpora with highly similar and redundant documents. We present RARE (Redundancy-Aware Retrieval Evaluation), a framework for constructing realistic benchmarks by (i) decomposing documents into atomic facts to enable precise redundancy tracking and (ii) enhancing LLM-based data generation with CRRF. RAG benchmark data usually requires multiple quality criteria, but LLMs often yield trivial outputs. CRRF scores criteria separately and fuses decisions by rank, improving the reliability of generated data. Applying RARE to Finance, Legal, and Patent corpora, we introduce RedQA, where a strong retriever baseline drops from 66.4% PerfRecall@10 on 4-hop General-Wiki to 5.0-27.9% PerfRecall@10 at 4-hop depth, revealing robustness gaps that current benchmarks fail to capture. RARE enables practitioners to build domain-specific RAG evaluations that faithfully reflect real-world deployment conditions.
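The abstract describes CRRF only as scoring quality criteria separately and fusing decisions by rank. As a rough sketch of that pattern, the snippet below ranks candidate generated items per criterion and combines the ranks with a reciprocal-rank formula; the specific formula and the constant `k` are borrowed from standard reciprocal rank fusion and are an assumption, not RARE's actual definition of CRRF.

```python
def fuse_by_rank(criterion_scores, k=60):
    """criterion_scores: {criterion: {candidate: score}}.
    Rank candidates within each criterion, then sum reciprocal
    ranks 1/(k + rank) and return candidates best-first.
    NOTE: reciprocal-rank weighting is an illustrative choice,
    not the paper's stated formula."""
    fused = {}
    for scores in criterion_scores.values():
        ranked = sorted(scores, key=scores.get, reverse=True)
        for rank, cand in enumerate(ranked, start=1):
            fused[cand] = fused.get(cand, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)
```

Rank-based fusion avoids directly averaging raw LLM scores, whose scales can differ across criteria; a candidate must place well under several criteria at once to rank highly overall, which is one way to filter out trivial generations.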