ArXiv TLDR

HealthNLP_Retrievers at ArchEHR-QA 2026: Cascaded LLM Pipeline for Grounded Clinical Question Answering

arXiv: 2604.26880

Md Biplob Hosen, Md Alomgeer Hussein, Md Akmol Masud, Omar Faruque, Tera L Reynolds + 1 more

cs.CL, cs.LG

TLDR

This paper presents HealthNLP_Retrievers' cascaded LLM pipeline using Gemini 2.5 Pro for grounded clinical question answering over EHRs.

Key contributions

  • Developed a multi-stage cascaded LLM pipeline using Gemini 2.5 Pro for EHR-based clinical QA.
  • Includes modules for query reformulation, evidence scoring, and grounded response generation.
  • Features a high-precision framework for many-to-many answer-evidence alignment.
  • Ranked 1st in question interpretation at the ArchEHR-QA 2026 shared task.

Why it matters

This paper addresses the critical need for patients to understand complex EHR information, which direct access alone doesn't ensure. It demonstrates that integrating LLMs into structured pipelines significantly improves the grounding, precision, and professional quality of patient-oriented health communication. This work helps bridge the gap between clinical data and patient comprehension.

Original Abstract

Patient portals now give individuals direct access to their electronic health records (EHRs), yet access alone does not ensure patients understand or act on the complex clinical information contained in these records. The ArchEHR-QA 2026 shared task addresses this challenge by focusing on grounded question answering over EHRs, and this paper presents the system developed by the HealthNLP_Retrievers team for this task. The proposed approach uses a multi-stage cascaded pipeline powered by the Gemini 2.5 Pro large language model to interpret patient-authored questions and retrieve relevant evidence from lengthy clinical notes. Our architecture comprises four integrated modules: (1) a few-shot query reformulation unit which summarizes verbose patient queries; (2) a heuristic-based evidence scorer which ranks clinical sentences to prioritize recall; (3) a grounded response generator which synthesizes professional-caliber answers restricted strictly to identified evidence; and (4) a high-precision many-to-many alignment framework which links generated answers to supporting clinical sentences. This cascaded approach achieved competitive results. Across the individual tracks, the system ranked 1st in question interpretation, 5th in answer generation, 7th in evidence identification, and 9th in answer-evidence alignment. These results show that integrating large language models within a structured multi-stage pipeline improves grounding, precision, and the professional quality of patient-oriented health communication. To support reproducibility, our source code is publicly available in our GitHub repository.
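The four-stage cascade described above can be sketched as a simple pipeline. This is a minimal illustration, not the authors' implementation: `call_llm` is a stub standing in for a Gemini 2.5 Pro API call, and the term-overlap heuristics used for evidence scoring and answer-evidence alignment are assumptions (the abstract does not specify the paper's actual heuristics).

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a Gemini 2.5 Pro call; echoes the prompt's last line."""
    return prompt.strip().splitlines()[-1]

def reformulate_query(question: str) -> str:
    # Stage 1: few-shot query reformulation (the prompt here is illustrative).
    return call_llm(f"Summarize the patient's question in one sentence:\n{question}")

def score_evidence(query: str, sentences: list[str]) -> list[tuple[float, str]]:
    # Stage 2: heuristic evidence scoring; plain term overlap is an assumed
    # stand-in for the paper's recall-oriented scorer.
    q = set(query.lower().split())
    return sorted(((len(q & set(s.lower().split())) / max(len(q), 1), s)
                   for s in sentences), reverse=True)

def generate_answer(query: str, evidence: list[str]) -> str:
    # Stage 3: grounded generation restricted strictly to the selected evidence.
    context = "\n".join(evidence)
    return call_llm(f"Answer using ONLY this evidence:\n{context}\nQuestion: {query}")

def align(answer_sents: list[str], evidence: list[str],
          threshold: float = 0.2) -> dict[str, list[str]]:
    # Stage 4: many-to-many alignment — each answer sentence may link to
    # several evidence sentences (and vice versa) via lexical overlap.
    links: dict[str, list[str]] = {}
    for a in answer_sents:
        a_terms = set(a.lower().split())
        links[a] = [e for e in evidence
                    if len(a_terms & set(e.lower().split()))
                    / max(len(a_terms), 1) >= threshold]
    return links

# Tiny end-to-end demo on a made-up patient question and note sentences.
question = "Why was my heart rate so high after the operation and what was done?"
notes = ["Patient developed sinus tachycardia after surgery.",
         "Metoprolol was started to control the heart rate.",
         "Diet was advanced as tolerated."]
query = reformulate_query(question)
ranked = score_evidence(query, notes)
top = [s for score, s in ranked if score > 0][:2]   # keep top-ranked evidence
answer = generate_answer(query, top)
links = align([answer], top)
```

With a real LLM behind `call_llm`, each stage's output feeds the next, so errors in reformulation or scoring propagate downstream; that cascade structure is what the shared-task rankings (1st in interpretation, lower in alignment) reflect.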
