Reducing cross-sample prediction churn in scientific machine learning
Gordan Prastalo, Kevin Maik Jablonka
TLDR
This paper introduces "cross-sample prediction churn" in scientific ML, the disagreement between models trained on different resamples of the same data, and proposes data-side methods, including "twin-bootstrap," that substantially reduce it.
Key contributions
- Identifies "cross-sample prediction churn," where individual predictions change under a different draw of training data even though aggregate accuracy barely moves (formalized in the sketch after this list).
- Shows that standard parameter-side methods (deep ensembles, MC dropout, stochastic weight averaging) fail to reduce churn, while data-side methods succeed.
- Proposes "twin-bootstrap," a novel method that, at matched compute, reduces churn by a further median 45% beyond 2-bootstrap bagging.
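To make the quantity concrete, here is a minimal formalization (the notation is ours, not the paper's): given two classifiers $f_A$ and $f_B$ trained on independent bootstrap resamples of the same training set, the cross-sample prediction churn on a test set $\mathcal{D}_{\mathrm{test}}$ is the fraction of test points whose predicted label differs,

$$\mathrm{churn}(f_A, f_B) = \frac{1}{|\mathcal{D}_{\mathrm{test}}|} \sum_{x \in \mathcal{D}_{\mathrm{test}}} \mathbf{1}\big[f_A(x) \neq f_B(x)\big].$$

On the paper's chemistry benchmarks this sits at $8.0\text{--}21.8\%$ of test molecules, even though the two models agree on aggregate accuracy to within $1.3\text{--}4.2$ percentage points.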
Why it matters
Scientific machine learning often overlooks prediction stability. This work highlights a critical blind spot, prediction churn, and shows that common uncertainty quantification methods do not address it. By introducing effective data-side solutions, it pushes for more robust and reliable scientific ML models.
Original Abstract
Scientific machine learning reports predictive performance. It does not report whether the same prediction would survive a different draw of training data. Across $9$ chemistry benchmarks, two classifiers trained on independent bootstraps of the same training set agree on aggregate accuracy to within $1.3\text{--}4.2$ percentage points but disagree on the class label of $8.0\text{--}21.8\%$ of test molecules. We call this gap \emph{cross-sample prediction churn}. The standard parameter-side techniques (deep ensembles, MC dropout, stochastic weight averaging) do not reduce this gap; two data-side methods do. The first is $K$-bootstrap bagging, which cuts the rate $40\text{--}54\%$ on every dataset at no accuracy cost ($K{\times}$-ERM compute). The second is \emph{twin-bootstrap}, our proposal: two networks trained jointly on independent bootstraps with a sym-KL consistency loss between their predictions, which at matched $2{\times}$-ERM compute reduces churn a further median $45\%$ beyond bagging-$K{=}2$. Cross-sample prediction churn deserves a column alongside predictive performance in scientific-ML benchmark reports, because without it the parameter-side and data-side methods are indistinguishable on the metric they actually differ on.
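The abstract describes twin-bootstrap as two networks trained jointly on independent bootstraps with a symmetric-KL consistency loss between their predictions. Below is a minimal PyTorch sketch of one joint training step; the function names, the batch on which the consistency term is evaluated (`shared_x`), and the weighting `lam` are our assumptions for illustration, not details confirmed by the abstract.

```python
import torch
import torch.nn.functional as F

def sym_kl_consistency(logits_a, logits_b):
    """Symmetric KL divergence between the two twins' predictive distributions."""
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    # KL(p || q) + KL(q || p), averaged over the batch
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return kl_pq + kl_qp

def twin_bootstrap_step(model_a, model_b, batch_a, batch_b, shared_x, lam=1.0):
    """One joint step: each twin fits its own bootstrap resample, plus a
    consistency penalty tying their predictions together on shared inputs.
    `shared_x` and the weight `lam` are illustrative choices, not the paper's."""
    xa, ya = batch_a  # drawn from bootstrap resample A
    xb, yb = batch_b  # drawn from bootstrap resample B
    loss_a = F.cross_entropy(model_a(xa), ya)
    loss_b = F.cross_entropy(model_b(xb), yb)
    consistency = sym_kl_consistency(model_a(shared_x), model_b(shared_x))
    return loss_a + loss_b + lam * consistency
```

For comparison, $K$-bootstrap bagging trains $K$ independent models on separate bootstraps and aggregates their predictions only at inference time; twin-bootstrap instead couples the two models during training, and at matched $2\times$-ERM compute cuts churn a further median 45% beyond bagging-$K{=}2$.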