Uncertainty in Physics and AI: Taxonomy, Quantification, and Validation
Manuel Haußmann, Ramon Winterhalder, Maria Ubiali
TLDR
This paper presents a unified framework for classifying, quantifying, and validating uncertainty in machine learning for physics applications.
Key contributions
- Introduces a unified taxonomy for uncertainty in ML for physics.
- Clarifies predictive and inference uncertainties across frequentist and Bayesian frameworks.
- Discusses principled validation tools: coverage, calibration, bias tests, and scoring rules.
- Illustrates concepts with simple regression and classification examples.
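To make the validation tools in the bullets above concrete, here is a minimal sketch (illustrative only, not code from the paper) of two of them on a toy Gaussian regression problem: empirical coverage of a central 95% prediction interval, and the negative log-likelihood as a proper scoring rule. A well-calibrated predictive standard deviation is compared against an overconfident one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (an assumption for illustration): observations with true
# noise sigma = 1; the model predicts mean 0 with standard deviation sigma_hat.
n = 10_000
y = rng.normal(loc=0.0, scale=1.0, size=n)

def empirical_coverage(y, mu, sigma, z=1.96):
    """Fraction of observations inside the central 95% prediction interval."""
    lo, hi = mu - z * sigma, mu + z * sigma
    return np.mean((y >= lo) & (y <= hi))

def nll(y, mu, sigma):
    """Gaussian negative log-likelihood, a strictly proper scoring rule."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu) ** 2 / (2 * sigma**2))

# Well-calibrated model (sigma_hat = 1) vs. an overconfident one (sigma_hat = 0.5):
for sigma_hat in (1.0, 0.5):
    cov = empirical_coverage(y, 0.0, sigma_hat)
    score = nll(y, 0.0, sigma_hat)
    print(f"sigma_hat={sigma_hat}: 95% coverage={cov:.3f}, NLL={score:.3f}")
```

The calibrated model attains close to the nominal 95% coverage and the lower (better) score, while the overconfident one under-covers and is penalized by the proper scoring rule.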
Why it matters
Scientific discoveries increasingly rest on validated probabilistic statements, so reliable uncertainty quantification is a prerequisite for applying machine learning in physics. By laying out a foundational framework, the paper supports more trustworthy and interpretable AI in scientific research.
Original Abstract
Reliable uncertainty quantification is essential for the use of machine learning in physics, where scientific discoveries depend on validated probabilistic statements. We provide a structured overview of uncertainty quantification in ML for physics, introducing a unified taxonomy of uncertainty and clarifying the interpretation of predictive and inference uncertainties across frequentist and Bayesian frameworks. We discuss principled validation tools, including coverage, calibration, bias tests, and proper scoring rules, and illustrate them with simple regression and classification examples.
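The abstract also mentions calibration for classification. A minimal sketch of that idea (again illustrative, with assumed toy data rather than the paper's examples): binary labels are drawn from the predicted probabilities themselves, so the forecasts are calibrated by construction, and the Brier score, another proper scoring rule, is compared against an overconfident variant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification (an assumption for illustration): labels are
# sampled with success probability p, so p is calibrated by construction.
n = 20_000
p = rng.uniform(0.0, 1.0, size=n)
y = (rng.uniform(size=n) < p).astype(float)

def brier(p, y):
    """Brier score: mean squared error of probabilistic forecasts (proper)."""
    return np.mean((p - y) ** 2)

def reliability(p, y, bins=10):
    """Per-bin (mean predicted probability, empirical frequency) pairs."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    return [(p[idx == b].mean(), y[idx == b].mean())
            for b in range(bins) if np.any(idx == b)]

# Overconfident forecasts: probabilities pushed toward 0 and 1.
p_over = np.clip(1.5 * (p - 0.5) + 0.5, 0.0, 1.0)
print("Brier (calibrated):   ", round(brier(p, y), 4))
print("Brier (overconfident):", round(brier(p_over, y), 4))
```

For the calibrated forecasts the per-bin reliability pairs lie close to the diagonal and the Brier score is lower; distorting the probabilities away from calibration can only increase a proper score.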