MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events
Raunak Agarwal, Markus Wenzel, Simon Baur, Jonas Zimmer, George Harvey, et al.
TLDR
MADE is a living benchmark for multi-label text classification of medical device adverse events, emphasizing uncertainty quantification for high-stakes healthcare.
Key contributions
- Introduces MADE, a living MLTC benchmark for medical device adverse event reports.
- Continuously updated to prevent data contamination, featuring hierarchical and long-tailed labels.
- Establishes baselines across 20+ models and systematically assesses UQ methods.
- Reveals clear trade-offs: smaller fine-tuned decoders achieve the strongest accuracy, while generative fine-tuning yields the most reliable UQ.
Why it matters
In high-stakes domains like healthcare, reliable uncertainty quantification is crucial for human oversight. MADE provides a robust, continuously updated benchmark to address data contamination and evaluate MLTC models, offering critical insights into UQ performance across diverse architectures. This helps improve trust and safety in AI applications.
Original Abstract
Machine learning in high-stakes domains such as healthcare requires not only strong predictive performance but also reliable uncertainty quantification (UQ) to support human oversight. Multi-label text classification (MLTC) is a central task in this domain, yet remains challenging due to label imbalances, dependencies, and combinatorial complexity. Existing MLTC benchmarks are increasingly saturated and may be affected by training data contamination, making it difficult to distinguish genuine reasoning capabilities from memorization. We introduce MADE, a living MLTC benchmark derived from medical device adverse event reports and continuously updated with newly published reports to prevent contamination. MADE features a long-tailed distribution of hierarchical labels and enables reproducible evaluation with strict temporal splits. We establish baselines across more than 20 encoder- and decoder-only models under fine-tuning and few-shot settings (instruction-tuned/reasoning variants, local/API-accessible). We systematically assess entropy-/consistency-based and self-verbalized UQ methods. Results show clear trade-offs: smaller discriminatively fine-tuned decoders achieve the strongest head-to-tail accuracy while maintaining competitive UQ; generative fine-tuning delivers the most reliable UQ; large reasoning models improve performance on rare labels yet exhibit surprisingly weak UQ; and self-verbalized confidence is not a reliable proxy for uncertainty. Our work is publicly available at https://hhi.fraunhofer.de/aml-demonstrator/made-benchmark.
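To make the entropy-based UQ idea concrete: for a multi-label classifier that outputs independent per-label probabilities, one common uncertainty score is the mean Bernoulli entropy across labels. The sketch below is illustrative only; it is not MADE's actual evaluation code, and the function name and example probabilities are assumptions.

```python
import math

def multilabel_entropy(probs):
    """Mean Bernoulli entropy (in bits) over per-label probabilities.

    Higher values mean the model is less certain about the label set.
    This is a generic illustration of entropy-based UQ, not the
    benchmark's own implementation.
    """
    def h(p):
        # Binary entropy; 0 at p in {0, 1} by convention.
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(p) for p in probs) / len(probs)

confident = multilabel_entropy([0.99, 0.01, 0.98])  # near 0 bits
uncertain = multilabel_entropy([0.5, 0.5, 0.5])     # 1.0 bit
```

A threshold on such a score can route low-confidence predictions to human review, which is the kind of oversight workflow the abstract motivates.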