ArXiv TLDR

A Hybrid Retrieval and Reranking Framework for Evidence-Grounded Retrieval-Augmented Generation

arXiv:2605.01664

Fariba Afrin Irany, Sampson Akwafuo

cs.IR

TLDR

A hybrid RAG framework combines retrieval, reranking, and claim-level evaluation, reaching 100% grounding accuracy on a 25-query biomedical Q&A pilot.

Key contributions

  • Introduces a hybrid RAG framework for biomedical and healthcare document question answering.
  • Builds on Amazon Bedrock Knowledge Bases, Titan Text Embeddings V2, OpenSearch Serverless, and Cohere reranking for evidence retrieval and prioritization.
  • Includes a separate judge model that evaluates each generated factual claim against the retrieved evidence; all 200 extracted claims were judged supported (100% grounding accuracy).
  • Demonstrates reliable evidence-grounded responses through hybrid retrieval, reranking, and conservative prompting (see the pipeline sketch after this list).
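The retrieve-rerank-generate pipeline described above maps onto a handful of SDK calls. The sketch below is illustrative, not the authors' implementation: the knowledge base ID, model IDs, API key, prompt wording, and the use of the standalone Cohere SDK (rather than Cohere reranking invoked through Bedrock) are all assumptions.

```python
# Illustrative sketch of the hybrid retrieve -> rerank -> generate flow.
# Hypothetical values: KB_ID, model IDs, API key, and all prompt wording.
import boto3
import cohere

KB_ID = "YOUR_KB_ID"                           # hypothetical Knowledge Base ID
GEN_MODEL_ID = "amazon.titan-text-express-v1"  # placeholder generator model
kb_client = boto3.client("bedrock-agent-runtime")
gen_client = boto3.client("bedrock-runtime")
co = cohere.Client("YOUR_COHERE_API_KEY")      # hypothetical API key


def retrieve_chunks(query: str, k: int = 20) -> list[str]:
    """Hybrid (lexical + vector) retrieval from a Bedrock Knowledge Base."""
    resp = kb_client.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {
                "numberOfResults": k,
                "overrideSearchType": "HYBRID",
            }
        },
    )
    return [r["content"]["text"] for r in resp["retrievalResults"]]


def rerank_chunks(query: str, chunks: list[str], top_n: int = 5) -> list[str]:
    """Reorder candidates with Cohere's rerank endpoint; keep the top few."""
    reranked = co.rerank(
        model="rerank-english-v3.0", query=query, documents=chunks, top_n=top_n
    )
    return [chunks[hit.index] for hit in reranked.results]


def generate_answer(query: str, evidence: list[str]) -> str:
    """Conservative, evidence-grounded generation via the Bedrock Converse API."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(evidence))
    prompt = (
        "Answer using ONLY the numbered evidence below, citing chunk numbers. "
        "If the evidence is insufficient, say so.\n\n"
        f"Evidence:\n{context}\n\nQuestion: {query}"
    )
    resp = gen_client.converse(
        modelId=GEN_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]


query = "How are transformer models used in biomedical NLP?"
print(generate_answer(query, rerank_chunks(query, retrieve_chunks(query))))
```

Hybrid search (overrideSearchType="HYBRID") combines lexical and vector matching at retrieval time; the reranker then reorders that wider candidate pool so only the strongest evidence reaches the generator.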

Why it matters

This paper tackles a core RAG challenge: ensuring generated responses are fully supported by retrieved evidence. By pairing robust reranking with a claim-level evaluation judge, it offers a practical recipe for more reliable RAG systems, which is vital in high-stakes domains like healthcare.

Original Abstract

Retrieval-augmented generation (RAG) improves large language model reliability by grounding generated responses in external evidence. However, RAG performance depends on the relevance of retrieved passages, the quality of evidence ranking, and the ability to verify whether generated claims are supported by source documents. This study presents a hybrid retrieval and reranking framework for citation-aware RAG in biomedical and healthcare-related document question answering. The framework uses Amazon Bedrock Knowledge Bases for document ingestion, parsing, chunking, embedding generation, and evidence retrieval. Source PDF documents are stored in Amazon S3, embedded using Amazon Titan Text Embeddings V2, and indexed with Amazon OpenSearch Serverless. Hybrid retrieval first retrieves candidate evidence chunks, and Cohere reranking then prioritizes the most relevant passages before answer generation. The answer-generation stage uses top-ranked evidence chunks to produce controlled, evidence-grounded responses, while a separate judge model evaluates each generated factual claim against the retrieved evidence. The framework was evaluated using 25 biomedical NLP and healthcare transformer queries as a pilot-scale proof-of-concept study. Across the evaluation set, the system retrieved and reranked 500 evidence chunks and generated answers from top-ranked evidence. Claim-level grounding evaluation extracted 200 factual claims, all of which were judged to be supported by retrieved evidence, resulting in 100.0% grounding accuracy. The results suggest that hybrid retrieval, reranking, conservative prompting, and claim-level evaluation can support reliable evidence-grounded RAG responses when sufficient source evidence is available.
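To make the claim-level evaluation step concrete, here is a minimal sketch of how a judge model might score extracted claims against retrieved evidence. The judge prompt, model ID, and binary SUPPORTED/NOT_SUPPORTED protocol are assumptions for illustration; only the metric itself, supported claims divided by total claims (200/200 = 100.0% in the paper's pilot), comes from the abstract.

```python
# Illustrative sketch of claim-level grounding evaluation with a judge model.
# Hypothetical values: JUDGE_MODEL_ID and the judge prompt/protocol.
import boto3

JUDGE_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder judge
judge_client = boto3.client("bedrock-runtime")


def judge_claim(claim: str, evidence: list[str]) -> bool:
    """Ask the judge model whether one factual claim is supported."""
    prompt = (
        "Evidence:\n" + "\n\n".join(evidence) + "\n\n"
        f"Claim: {claim}\n\n"
        "Reply with exactly SUPPORTED if the evidence supports the claim, "
        "otherwise reply NOT_SUPPORTED."
    )
    resp = judge_client.converse(
        modelId=JUDGE_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    verdict = resp["output"]["message"]["content"][0]["text"].strip()
    return verdict.upper().startswith("SUPPORTED")


def grounding_accuracy(claims: list[str], evidence: list[str]) -> float:
    """Share of claims judged supported; the paper reports 200/200 = 100.0%."""
    return sum(judge_claim(c, evidence) for c in claims) / len(claims)
```

Judging each atomic claim separately, rather than the whole answer at once, is what lets the framework report grounding at the claim level; combined with conservative prompting at generation time, unsupported claims should rarely be emitted in the first place.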
