ArXiv TLDR

MedHopQA: A Disease-Centered Multi-Hop Reasoning Benchmark and Evaluation Framework for LLM-Based Biomedical Question Answering

arXiv:2605.12361

Rezarta Islamaj, Robert Leaman, Joey Chan, Nicholas Wan, Qiao Jin + 11 more

cs.CL · cs.AI · cs.IR

TLDR

MedHopQA is a new disease-centered multi-hop reasoning benchmark for evaluating LLMs in biomedical QA, designed to resist saturation and contamination.

Key contributions

  • Features 1,000 expert-curated, disease-centered multi-hop QA pairs, each requiring synthesis of information across two distinct Wikipedia articles.
  • Uses open-ended free-text answers with ontology-grounded synonym sets for robust evaluation.
  • Designed to resist saturation and contamination: the 1,000 scored questions are hidden within a publicly downloadable set of 10,000 questions with answers withheld.
  • Provides a reusable framework for building future biomedical QA datasets prioritizing compositional reasoning.
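The synonym-set evaluation described above can be sketched as follows. This is a minimal illustration, not the official MedHopQA scorer: the normalization rules and the example gold answer and synonyms are assumptions made here, while the idea of accepting any ontology-grounded synonym (e.g. from MONDO, NCBI Gene, or NCBI Taxonomy) comes from the benchmark description.

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so lexical matching is robust
    to surface variation (hyphens, casing, extra whitespace)."""
    return " ".join(re.sub(r"[^a-z0-9]+", " ", text.lower()).split())

def is_correct(prediction: str, gold: str, synonyms: set[str]) -> bool:
    """Accept a free-text prediction if it matches the gold answer or
    any entry in its ontology-grounded synonym set after normalization."""
    accepted = {normalize(gold)} | {normalize(s) for s in synonyms}
    return normalize(prediction) in accepted

# Hypothetical example: a gold disease answer with MONDO-style synonyms.
print(is_correct(
    "Hutchinson-Gilford syndrome",
    "Hutchinson-Gilford progeria syndrome",
    {"HGPS", "progeria", "Hutchinson Gilford syndrome"},
))  # True
```

A concept-level variant would additionally map both prediction and gold to ontology identifiers before comparing, so that unlisted but synonymous surface forms still score as correct.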

Why it matters

Existing biomedical QA benchmarks struggle to isolate reasoning from pattern matching and are increasingly vulnerable to saturation and contamination. MedHopQA addresses these weaknesses by focusing on multi-hop reasoning, which is central to clinical tasks such as diagnostic support and literature-based discovery, and by providing a robust evaluation framework for advanced LLMs.

Original Abstract

Evaluating large language models (LLMs) in the biomedical domain requires benchmarks that can distinguish reasoning from pattern matching and remain discriminative as model capabilities improve. Existing biomedical question answering (QA) benchmarks are limited in this respect. Multiple-choice formats can allow models to succeed through answer elimination rather than inference, while widely circulated exam-style datasets are increasingly vulnerable to performance saturation and training data contamination. Multi-hop reasoning, defined as the ability to integrate information across multiple sources to derive an answer, is central to clinically meaningful tasks such as diagnostic support, literature-based discovery, and hypothesis generation, yet remains underrepresented in current biomedical QA benchmarks. MedHopQA is a disease-centered multi-hop reasoning benchmark consisting of 1,000 expert-curated question-answer pairs introduced as a shared task at BioCreative IX. Each question requires synthesis of information across two distinct Wikipedia articles, and answers are provided in an open-ended free-text format. Gold annotations are augmented with ontology-grounded synonym sets from MONDO, NCBI Gene, and NCBI Taxonomy to support both lexical and concept-level evaluation. MedHopQA was constructed through a structured process combining human annotation, triage, iterative verification, and LLM-as-a-judge validation. To reduce leaderboard gaming and contamination risk, the 1,000 scored questions are embedded within a publicly downloadable set of 10,000 questions, with answers withheld, on a CodaBench leaderboard. MedHopQA provides both a benchmark and a reusable framework for constructing future biomedical QA datasets that prioritize compositional reasoning, saturation resistance, and contamination resistance as core design constraints.
