ArXiv TLDR

Benchmarking Retrieval Strategies for Biomedical Retrieval-Augmented Generation: A Controlled Empirical Study

2605.02520

Devi Prasad Bal, Subhashree Puhan

cs.CL cs.AI cs.IR

TLDR

This paper systematically compares five retrieval strategies for biomedical RAG and finds that Cross-Encoder Reranking performs best.

Key contributions

  • Systematically compared five retrieval strategies (Dense, Hybrid, Reranking, Multi-Query, MMR) for biomedical RAG.
  • Evaluated strategies on 250 BioASQ Q&A pairs using four DeepEval metrics: contextual precision, contextual recall, faithfulness, and answer relevancy.
  • Cross-Encoder Reranking achieved the best composite score and highest contextual precision (0.852).
  • All RAG methods significantly improved answer relevancy over a no-context baseline.
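The winning strategy above, Cross-Encoder Reranking, is a two-stage retrieve-then-rescore loop: fetch a broad candidate set, then score each (query, document) pair jointly and keep the top-k. A minimal sketch follows; `score_pair` is a toy lexical-overlap stand-in for a real cross-encoder model (the paper's actual reranker is not named in this summary), and the example documents are invented.

```python
def score_pair(query: str, doc: str) -> float:
    """Toy stand-in for a cross-encoder: fraction of query tokens found in the doc."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & d_tokens) / len(q_tokens)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Rescore every retrieved candidate against the query and keep the top_k."""
    scored = sorted(candidates, key=lambda d: score_pair(query, d), reverse=True)
    return scored[:top_k]

docs = [
    "Aspirin inhibits platelet aggregation.",
    "The BioASQ benchmark covers biomedical question answering.",
    "Cross-encoders jointly encode query and document pairs.",
]
print(rerank("What does the BioASQ benchmark cover?", docs, top_k=1))
# → ['The BioASQ benchmark covers biomedical question answering.']
```

A real pipeline would replace `score_pair` with a trained cross-encoder forward pass, which is what makes the query-document interaction (and the precision gains the paper reports) possible.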

Why it matters

This study provides controlled empirical evidence on which retrieval strategy to choose for RAG in high-stakes biomedical applications. Its findings offer practical guidance for building more accurate and reliable LLM-based systems in critical domains.

Original Abstract

Retrieval-Augmented Generation (RAG) offers a well-established path to grounding large language model (LLM) outputs in external knowledge, yet the question of which retrieval strategy works best in a high-stakes domain such as biomedicine has not received the controlled, multi-metric treatment it deserves. This paper presents a systematic empirical comparison of five retrieval strategies -- Dense Vector Search, Hybrid BM25 + Dense retrieval, Cross-Encoder Reranking, Multi-Query Expansion, and Maximal Marginal Relevance (MMR) -- within a biomedical question-answering RAG pipeline. All strategies share a fixed generation model (GPT-4o-mini), a common vector store (ChromaDB), and OpenAI's text-embedding-3-small embeddings, ensuring that observed differences are attributable to retrieval alone. Evaluation is conducted on 250 question-answer pairs drawn from a preprocessed subset of the BioASQ benchmark (rag-mini-bioasq) using four DeepEval metrics: contextual precision, contextual recall, faithfulness, and answer relevancy, each reported with 95% confidence intervals. A no-context ablation is included as a lower bound. Cross-Encoder Reranking achieves the best composite score (0.827) and highest contextual precision (0.852), confirming that query-document interaction yields measurable retrieval gains. Multi-Query Expansion, despite its recall-oriented design, produces the weakest contextual precision (0.671), suggesting naive query diversification introduces retrieval noise. MMR sacrifices answer relevancy for diversity, while the Dense baseline (composite 0.822) falls within 0.005 points of the top strategy. All RAG conditions dramatically outperform the no-context ablation on answer relevancy (0.658-0.701 vs. 0.287), confirming the practical value of retrieval. The full pipeline, hyperparameters, and evaluation code are publicly available.
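Among the strategies compared in the abstract, Maximal Marginal Relevance (MMR) explicitly trades relevance against redundancy: each round it selects the document maximizing `lam * relevance - (1 - lam) * max_similarity_to_already_selected`. A minimal sketch, using toy 2-D vectors in place of the paper's text-embedding-3-small embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr(query_vec, doc_vecs, k=2, lam=0.5):
    """Return indices of k documents chosen greedily by Maximal Marginal Relevance."""
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            # Penalize similarity to anything already picked (0 on the first pick).
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.0]
# Docs 0 and 1 are near-duplicates close to the query; doc 2 is diverse.
docs = [[0.9, 0.44], [0.9, 0.42], [0.8, -0.6]]
print(mmr(query, docs, k=2, lam=0.5))  # → [1, 2]: diversity wins the second slot
print(mmr(query, docs, k=2, lam=1.0))  # → [1, 0]: pure relevance keeps the duplicate
```

The two calls illustrate exactly the trade-off the abstract describes: lowering `lam` buys diversity at the cost of raw relevance, which is why MMR sacrifices some answer relevancy in the paper's results.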

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.