ArXiv TLDR

Same Brain, Different Prediction: How Preprocessing Choices Undermine EEG Decoding Reliability

arXiv: 2605.07212

Dengzhe Hou, Zihao Wu, Lingyu Jiang, Zirui Li, Fangzhou Lin + 1 more

cs.LG · cs.AI · cs.HC · cs.NE · eess.SP

TLDR

Preprocessing choices undermine EEG decoding reliability: up to 42% of trial-level predictions flip when only the pipeline changes, motivating new tools to measure and reduce this instability.

Key contributions

  • EEG predictions are highly unstable; up to 42% of trial-level predictions flip with preprocessing changes.
  • Introduces a Walsh-Hadamard decomposition of the 2^7 pipeline space to efficiently analyze preprocessing sensitivity.
  • Proposes Preprocessing Uncertainty (PU) as a per-trial diagnostic for instability.
  • Explores Normalized Adaptive PGI (NA-PGI), a graph-structured regularizer, as one strategy to mitigate instability.
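The Walsh-Hadamard idea above can be sketched in a few lines: index each of the 2^k binary preprocessing configurations by a k-bit integer, record one scalar metric (e.g. accuracy) per configuration, and take the Walsh-Hadamard transform. Coefficient 0 is the mean, single-bit indices are per-step main effects, and multi-bit indices are interactions; "near-additive" means the interaction coefficients are close to zero. This is a minimal illustration with a toy 2^3 space, not the paper's implementation.

```python
import numpy as np

def walsh_hadamard_transform(values):
    """Fast Walsh-Hadamard transform (natural ordering), normalized by n.

    `values[c]` is a scalar metric for the binary pipeline configuration
    with bit pattern c. Returns coefficients where index 0 is the mean,
    indices with one bit set are main effects of individual preprocessing
    steps, and indices with several bits set are interaction terms.
    """
    a = np.asarray(values, dtype=float).copy()
    n = a.size
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:  # standard iterative FWHT butterfly
        for i in range(0, n, h * 2):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return a / n

# Toy 2^3 pipeline space with a purely additive (hypothetical) metric:
# accuracy = 0.7 + per-step main effects, no interactions, tiny noise.
rng = np.random.default_rng(0)
main = np.array([0.03, -0.05, 0.01])  # hypothetical per-step effects
acc = np.array([0.7 + sum(main[b] for b in range(3) if c >> b & 1)
                for c in range(8)]) + rng.normal(0, 1e-4, 8)
coef = walsh_hadamard_transform(acc)
# coef[0] ~ mean accuracy; coef[3], coef[5], coef[6], coef[7] (interaction
# indices) are near zero because the metric is additive in the steps.
```

With 7 binary steps this costs only 2^7 = 128 metric evaluations plus an O(n log n) transform, and near-zero interaction terms justify optimizing each preprocessing step independently.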

Why it matters

This paper reveals a critical flaw in current EEG deep learning: preprocessing choices significantly impact prediction reliability, a factor often overlooked. It quantifies this instability and provides novel tools to measure, decompose, and reduce it. This work is vital for improving the robustness and trustworthiness of EEG-based brain-computer interfaces and clinical applications.

Original Abstract

Electroencephalography (EEG) is a cornerstone of brain-computer interfaces and clinical neuroscience, yet deep learning models are typically trained and evaluated under a single, unreported preprocessing pipeline. We formalize preprocessing choices as a counterfactual intervention space and show that EEG predictions are surprisingly unstable under this space: across six datasets spanning four paradigms, up to 42% of trial-level predictions flip when only the preprocessing changes, a variability that standard uncertainty methods do not explicitly quantify because they condition on a fixed preprocessing pipeline. We provide three tools to make this instability measurable, decomposable, and reducible. First, a Walsh-Hadamard decomposition of the 2^7 pipeline space reveals that sensitivity is near-additive in practice under the binary intervention design, enabling efficient step-by-step optimization. Second, we introduce Preprocessing Uncertainty (PU), a per-trial diagnostic that captures a dimension of instability complementary to model-based confidence. Third, we study Normalized Adaptive PGI (NA-PGI), a graph-structured regularizer that exploits the compositional structure of preprocessing interventions as one mitigation strategy with clear scope conditions.
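The per-trial instability that PU is meant to capture can be illustrated with a simple disagreement score: run the same trained model on the same trial preprocessed under several pipeline variants, and measure how far the predictions are from unanimous. The paper's exact PU definition is not given in this summary, so the function below is an illustrative proxy, not the authors' formula.

```python
import numpy as np

def preprocessing_uncertainty(preds):
    """Illustrative per-trial instability proxy (not the paper's exact PU).

    preds: (n_pipelines, n_trials) array of integer class predictions,
    one row per preprocessing variant of the same trials. Returns, per
    trial, the fraction of pipelines that disagree with the majority
    vote: 0.0 means every pipeline yields the same label; larger values
    mean the prediction is driven by the preprocessing choice.
    """
    preds = np.asarray(preds)
    n_pipelines, n_trials = preds.shape
    pu = np.empty(n_trials)
    for t in range(n_trials):
        _, counts = np.unique(preds[:, t], return_counts=True)
        pu[t] = 1.0 - counts.max() / n_pipelines
    return pu

# Trial 0: all 4 pipelines agree; trial 1: predictions flip 2-2.
preds = np.array([[0, 1],
                  [0, 0],
                  [0, 1],
                  [0, 0]])
print(preprocessing_uncertainty(preds))  # [0.  0.5]
```

Because this score conditions on the intervention space rather than on a single fixed pipeline, it is complementary to model-based confidence such as softmax entropy, which the abstract notes cannot see this source of variability.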
