Robust Evaluation of Neural Encoding Models via Ground-Truth Approximation
TLDR
Introduces CPA-PA, a robust evaluation framework for neural encoding models that approximates ground-truth neural activity, outperforming conventional metrics.
Key contributions
- New framework robustly evaluates neural encoding models by approximating ground-truth neural activity.
- Utilizes Canonical Correlation Analysis (CCA) and participant averaging (PA) for ground-truth approximation.
- The CPA-PA metric significantly outperforms conventional scores on synthetic and real MEEG datasets.
- Boosts sensitivity to stimulus-relevant activity and reduces dependence on the signal-to-noise ratio (SNR).
Why it matters
This paper offers a crucial advancement for neuroscience: a robust method for evaluating neural encoding models that overcomes the limitation that ground-truth neural activity is unknown. By substantially improving sensitivity and reducing SNR dependence, the CPA-PA metric enables more reliable interpretation of MEEG data and deeper insights into brain function.
Original Abstract
Encoding models enable measurement of how our brains represent sensory inputs using electro- and magneto-encephalography (MEEG). Evaluating how closely encoding models reflect the underlying brain functions is a crucial premise for model interpretation and hypothesis testing. However, the ground-truth neural activity is unknown, preventing model evaluation with respect to the target neural signal. Existing evaluation metrics must therefore relate a model's predictions to noisy MEEG measurements, where most variance is stimulus-unrelated. Here, I introduce an evaluation framework where model predictions are compared to a ground-truth approximation, obtained by aligning MEEG signals with predictions using canonical correlation analysis and via participant averaging. The resulting metric (CPA-PA) yields single-participant evaluations outperforming conventional scores by 300-1000% on synthetic EEG data and 250% on 34 real MEEG datasets (818 datapoints). These gains reflect increased sensitivity to stimulus-relevant neural activity and reduced dependence on SNR, establishing ground-truth approximation as a robust framework for evaluating encoding models.