Case-Specific Rubrics for Clinical AI Evaluation: Methodology, Validation, and LLM-Clinician Agreement Across 823 Encounters
Aaryan Shah, Andrew Hines, Alexia Downs, Denis Bajet, Paulius Mui, et al.
TLDR
A new methodology uses case-specific, clinician-authored rubrics to evaluate clinical AI, and shows that LLM-generated rubrics can match clinician-level ranking agreement at roughly 1,000x lower cost.
Key contributions
- Developed a methodology for case-specific, clinician-authored rubrics for clinical AI evaluation.
- Validated each rubric by confirming that an LLM-based scoring agent consistently scored clinician-preferred outputs above rejected ones, showing effective discrimination between high- and low-quality outputs (see the sketch after this list).
- Demonstrated that LLM-generated rubrics achieve clinician-level ranking agreement (tau: 0.42-0.46).
- Showed LLM-generated rubrics cost roughly 1,000 times less than expert clinician review, enabling substantially greater evaluation coverage.
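
A minimal sketch of the validation step referenced above, assuming a hypothetical `score_with_rubric` call that stands in for the LLM-based scoring agent; the function name, repeat count, and percentage-score convention are illustrative, not the paper's implementation:

```python
from statistics import median


def score_with_rubric(rubric: list[str], output_text: str) -> float:
    """Hypothetical stand-in for the LLM-based scoring agent.

    In the paper's setup this would prompt an LLM to grade output_text
    against each case-specific rubric criterion and return a percentage
    score; it is left unimplemented here.
    """
    raise NotImplementedError


def validate_rubric(rubric, preferred_output, rejected_output, n_runs=5):
    """Accept a rubric only if the scoring agent consistently ranks the
    clinician-preferred output above the rejected one, and report the
    score gap and run-to-run stability."""
    preferred = [score_with_rubric(rubric, preferred_output) for _ in range(n_runs)]
    rejected = [score_with_rubric(rubric, rejected_output) for _ in range(n_runs)]

    return {
        "valid": min(preferred) > max(rejected),            # preferred always wins
        "score_gap": median(preferred) - median(rejected),  # paper: median gap 82.9%
        "stability": max(preferred) - min(preferred),       # paper: median range 0.00%
    }
```

The acceptance criterion mirrors the paper's validation rule (preferred output consistently scored higher than the rejected one); the gap and stability values correspond to the score gap and scoring range the Results report.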
Why it matters
This paper introduces a scalable, cost-effective method for clinical AI evaluation, vital for safe deployment. By validating LLM-generated rubrics against expert clinician judgment, it enables automated AI assessment with clinical rigor, accelerating the integration of reliable AI tools in healthcare.
Original Abstract
Objective. Clinical AI documentation systems require evaluation methodologies that are clinically valid, economically viable, and sensitive to iterative changes. Methods requiring expert review per scoring instance are too slow and expensive for safe, iterative deployment. We present a case-specific, clinician-authored rubric methodology for clinical AI evaluation and examine whether LLM-generated rubrics can approximate clinician agreement.

Materials and Methods. Twenty clinicians authored 1,646 rubrics for 823 clinical cases (736 real-world, 87 synthetic) across primary care, psychiatry, oncology, and behavioral health. Each rubric was validated by confirming that an LLM-based scoring agent consistently scored clinician-preferred outputs higher than rejected ones. Seven versions of an EHR-embedded AI agent for clinicians were evaluated across all cases.

Results. Clinician-authored rubrics discriminated effectively between high- and low-quality outputs (median score gap: 82.9%) with high scoring stability (median range: 0.00%). Median scores improved from 84% to 95%. In later experiments, clinician-LLM ranking agreement (tau: 0.42-0.46) matched or exceeded clinician-clinician agreement (tau: 0.38-0.43), attributable to both ceiling compression and LLM rubric improvement.

Discussion. This convergence supports incorporating LLM rubrics alongside clinician-authored ones. At roughly 1,000 times lower cost, LLM rubrics enable substantially greater evaluation coverage, while continued clinical authorship grounds evaluation in expert judgment. Ceiling compression poses a methodological challenge for future inter-rater agreement studies.

Conclusion. Case-specific rubrics offer a path for clinical AI evaluation that preserves expert judgment while enabling automation at three orders of magnitude lower cost. Clinician-authored rubrics establish the baseline against which LLM rubrics are validated.
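
The ranking-agreement comparison in the Results can be reproduced in outline with a rank-correlation computation. A minimal sketch follows, assuming the reported tau is Kendall's rank correlation and using made-up rankings of the seven agent versions purely for illustration:

```python
from scipy.stats import kendalltau

# Illustrative rankings (not the paper's data): each list ranks the seven
# AI-agent versions, 1 = best, as induced by one evaluator's rubric scores.
clinician_a_rank = [1, 2, 3, 4, 5, 6, 7]
clinician_b_rank = [2, 1, 3, 5, 4, 7, 6]
llm_rubric_rank  = [1, 3, 2, 4, 5, 6, 7]

# Clinician-clinician agreement vs. clinician-LLM agreement.
tau_cc, _ = kendalltau(clinician_a_rank, clinician_b_rank)
tau_cl, _ = kendalltau(clinician_a_rank, llm_rubric_rank)

print(f"clinician-clinician tau = {tau_cc:.2f}")  # paper reports 0.38-0.43
print(f"clinician-LLM tau       = {tau_cl:.2f}")  # paper reports 0.42-0.46
```

Note that when most agent versions score near the rubric ceiling, the induced rankings compress and tau estimates become noisy, which is the caveat the Discussion raises for future inter-rater agreement studies.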