Explainability of Recurrent Neural Networks for Enhancing P300-based Brain-Computer Interfaces
Christian Oliva, Vinicio Changoluisa, Francisco B. Rodríguez, Luis F. Lago-Fernández
TLDR
This paper introduces PRM, an explainable RNN for P300-based BCIs, improving performance by 9% and revealing key spatio-temporal EEG patterns.
Key contributions
- Introduces Post-Recurrent Module (PRM) for RNNs to enhance P300 BCI performance and transparency.
- Achieves a 9% performance improvement over the state of the art in P300 signal classification.
- Enables dual spatio-temporal explainability, identifying relevant brain regions and critical time intervals.
- Provides a generalizable framework for explainable EEG models beyond P300, applicable to various tasks.
Why it matters
The paper addresses crucial limitations in P300-based BCIs by improving both performance and model transparency. Its ability to explain decisions in neurophysiologically consistent terms builds trust and facilitates clinical adoption. This framework offers a significant step towards more reliable and understandable EEG-based applications.
Original Abstract
Brain-Computer Interfaces (BCIs) based on P300 event-related potentials offer promising applications in health, education, and assistive technologies. However, challenges related to inter- and intra-subject variability and the explainability of Deep Learning (DL) models limit their practical deployment. In this work, we present the Post-Recurrent Module (PRM), an additional layer designed to improve both performance and transparency, incorporated into a Recurrent Neural Network (RNN) architecture for classifying P300 signals from EEG data. Our approach enables a dual analysis of spatio-temporal signals through both global and local explainability techniques, allowing us not only to identify the most relevant brain regions and critical time intervals involved in classification, but also to interpret model decisions in terms of spatio-temporal EEG patterns consistent with well-established neurophysiological descriptions of the P300. Experimental results show a 9% improvement in performance over the state of the art, while also revealing the importance of inter- and intra-subject variability, in alignment with established neuroscience literature. By making model decisions transparent and efficient, we present a framework for explainable EEG-based models. This framework is not limited to more efficient P300 detection, but can be generalized to a wide range of EEG-based tasks. Its ability to identify key spatial and temporal features makes it suitable for applications such as motor imagery, steady-state visual evoked potentials, and even cognitive workload assessment.
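To make the described architecture concrete, here is a minimal numpy sketch of the general idea: an RNN processes an EEG epoch step by step, and an additional per-time-step linear layer after the recurrence produces inspectable evidence scores. All dimensions, weight shapes, and the exact form of the post-recurrent layer are assumptions for illustration; the paper's actual PRM design and trained parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not from the paper): 8 EEG channels,
# 30 time samples per epoch, 16 recurrent hidden units.
n_channels, n_steps, n_hidden = 8, 30, 16

def rnn_forward(x, W_in, W_rec, b):
    """Simple Elman-style RNN returning the hidden state at every time step."""
    h = np.zeros(n_hidden)
    states = []
    for t in range(n_steps):
        h = np.tanh(x[t] @ W_in + h @ W_rec + b)
        states.append(h)
    return np.stack(states)              # shape (n_steps, n_hidden)

# Random weights stand in for a trained model.
W_in  = rng.normal(scale=0.1, size=(n_channels, n_hidden))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b     = np.zeros(n_hidden)

# Hypothetical post-recurrent layer: a linear map applied at each time
# step, whose per-step outputs can be inspected to see which intervals
# contribute most to the decision.
W_prm = rng.normal(scale=0.1, size=(n_hidden, 1))

x = rng.normal(size=(n_steps, n_channels))   # one synthetic EEG epoch
states = rnn_forward(x, W_in, W_rec, b)
scores = states @ W_prm                      # per-time-step evidence, (n_steps, 1)
p_target = 1.0 / (1.0 + np.exp(-scores.sum()))  # pooled P300 vs. non-P300 probability

print(states.shape, scores.shape)
```

Keeping a per-time-step score vector (rather than only a pooled logit) is what enables the temporal side of the explainability analysis: each entry of `scores` can be attributed to a time interval of the epoch.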