ArXiv TLDR

Trustworthy Clinical Decision Support Using Meta-Predicates and Domain-Specific Languages

arXiv: 2604.21263

Michael Bouzinier, Sergey Trifonov, Michael Chumack, Eugenia Lvova, Dmitry Etin

cs.AI cs.PL cs.SE q-bio.QM

TLDR

This paper introduces meta-predicates and a DSL to ensure clinical decision support systems use epistemologically appropriate and auditable evidence.

Key contributions

  • Introduces meta-predicates and an epistemological type system to constrain the evidence used in clinical decision rules (see the sketch after this list).
  • Enables per-variant audit trails by reformulating decision trees as unate cascades in a domain-specific language.
  • Catches epistemological errors before deployment, enhancing trustworthiness of human-written or AI-generated rules.
  • Complements post-hoc explainability methods by proactively constraining permissible evidence for decision logic.
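
The paper's concrete DSL syntax is not reproduced in this summary, so the following is only a minimal Python sketch of the idea. All names here (Annotation, validate_rule, the enum members) are hypothetical, not AnFiSA's actual API: annotations carry the four epistemological dimensions, and a meta-predicate inspects the evidence a rule declares rather than the variant data itself.

```python
from dataclasses import dataclass
from enum import Enum

# The paper's type system classifies annotations along four dimensions:
# purpose, knowledge domain, scale, and method of acquisition. The member
# names below are illustrative placeholders, not the paper's vocabulary.
class Purpose(Enum):
    FILTERING = "filtering"
    CLASSIFICATION = "classification"

class Domain(Enum):
    POPULATION_GENETICS = "population_genetics"
    CLINICAL_EVIDENCE = "clinical_evidence"
    IN_SILICO_PREDICTION = "in_silico_prediction"

class Scale(Enum):
    BOOLEAN = "boolean"
    CATEGORICAL = "categorical"
    CONTINUOUS = "continuous"

class Method(Enum):
    MEASURED = "measured"   # direct laboratory measurement
    CURATED = "curated"     # expert-reviewed assertion
    COMPUTED = "computed"   # in-silico prediction

@dataclass(frozen=True)
class Annotation:
    """A typed piece of evidence attached to a variant."""
    name: str
    purpose: Purpose
    domain: Domain
    scale: Scale
    method: Method

# A meta-predicate is a predicate about predicates: it constrains the
# evidence a rule may consume rather than testing variant data itself.
def no_in_silico_only(evidence: list[Annotation]) -> bool:
    """Permit computed scores only alongside measured or curated evidence."""
    return any(a.method != Method.COMPUTED for a in evidence)

def validate_rule(rule_name: str, evidence: list[Annotation]) -> None:
    """Reject an ill-typed rule at validation time, before deployment."""
    if not no_in_silico_only(evidence):
        raise TypeError(f"rule {rule_name!r} rests solely on in-silico "
                        f"predictions: {[a.name for a in evidence]}")

gnomad_af = Annotation("gnomAD_AF", Purpose.FILTERING,
                       Domain.POPULATION_GENETICS, Scale.CONTINUOUS,
                       Method.MEASURED)
cadd = Annotation("CADD_phred", Purpose.CLASSIFICATION,
                  Domain.IN_SILICO_PREDICTION, Scale.CONTINUOUS,
                  Method.COMPUTED)

validate_rule("benign_by_frequency", [gnomad_af, cadd])  # passes
try:
    validate_rule("pathogenic_by_score", [cadd])
except TypeError as err:
    print(err)  # caught before deployment, not after a misclassification
```

Because the check runs over rule declarations, an ill-typed rule fails during validation, before it ever classifies a variant; this is the design-by-contract flavour of the approach.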

Why it matters

AI in healthcare must produce auditable, epistemologically sound decisions to meet regulatory demands such as the EU AI Act and FDA guidance. The framework proposed here is proactive: meta-predicates constrain what evidence a clinical decision support rule may use, making decision logic transparent and verifiable before deployment rather than explainable only after the fact.

Original Abstract

Background: Regulatory frameworks for AI in healthcare, including the EU AI Act and FDA guidance on AI/ML-based medical devices, require clinical decision support to demonstrate not only accuracy but auditability. Existing formal languages for clinical logic validate syntactic and structural correctness, but not whether decision rules use epistemologically appropriate evidence.

Methods: Drawing on design-by-contract principles, we introduce meta-predicates (predicates about predicates) for asserting epistemological constraints on clinical decision rules expressed in a DSL. An epistemological type system classifies annotations along four dimensions: purpose, knowledge domain, scale, and method of acquisition. Meta-predicates assert which evidence types are permissible in any given rule. The framework is instantiated in AnFiSA, an open-source platform for genetic variant curation, and demonstrated using the Brigham Genomics Medicine protocol on 5.6 million variants from the Genome in a Bottle benchmark.

Results: Decision trees used in variant interpretation can be reformulated as unate cascades, enabling per-variant audit trails that identify which rule classified each variant and why. Meta-predicate validation catches epistemological errors before deployment, whether rules are human-written or AI-generated. The approach complements post-hoc methods such as LIME and SHAP: where explanation reveals what evidence was used after the fact, meta-predicates constrain what evidence may be used before deployment, while preserving human readability.

Conclusions: Meta-predicate validation is a step toward demonstrating not only that decisions are accurate but that they rest on appropriate evidence in ways that can be independently audited. While demonstrated in genomics, the approach generalises to any domain requiring auditable decision logic.
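
To make the unate-cascade reformulation concrete, here is a simplified first-match cascade in Python. It is a sketch, not AnFiSA's DSL; the rule names, fields, and thresholds are invented for illustration. The key property is that exactly one rule fires per variant, so the returned record doubles as the per-variant audit trail.

```python
from typing import Callable, NamedTuple

class Verdict(NamedTuple):
    label: str      # classification assigned to the variant
    rule: str       # name of the rule that fired
    evidence: dict  # the inputs the rule actually saw

# Each stage is (rule_name, predicate, label). Evaluation walks the stages
# in order and stops at the first predicate that fires, so every verdict is
# attributable to exactly one rule.
Stage = tuple[str, Callable[[dict], bool], str]

CASCADE: list[Stage] = [
    ("common_in_population", lambda v: v["gnomAD_AF"] > 0.05, "Benign"),
    ("known_pathogenic", lambda v: v["ClinVar"] == "Pathogenic", "Pathogenic"),
    ("protein_truncating", lambda v: v["consequence"] == "stop_gained",
     "Likely Pathogenic"),
]

def classify(variant: dict) -> Verdict:
    """Classify one variant and return a per-variant audit record."""
    for name, predicate, label in CASCADE:
        if predicate(variant):
            return Verdict(label, name, dict(variant))
    # Nothing fired: variant of uncertain significance, also recorded.
    return Verdict("VUS", "default", dict(variant))

variant = {"gnomAD_AF": 0.12, "ClinVar": "Benign", "consequence": "missense"}
print(classify(variant))
# Verdict(label='Benign', rule='common_in_population', evidence={...})
```

Auditing 5.6 million variants then reduces to collecting these verdict records, each of which names the single rule responsible for its classification.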
