UniBCI: Towards a Unified Pretrained Model for Invasive Brain-Computer Interfaces
Binjie Hong, Rui Xiong, Liyuan Han, Tielin Zhang
TLDR
UniBCI is a unified pretrained model for invasive Brain-Computer Interfaces, achieving state-of-the-art performance and generalization across diverse tasks.
Key contributions
- Proposes UniBCI, a unified pretrained model for invasive Brain-Computer Interfaces.
- Introduces Context-conditioned Spatio-Temporal Tokenization (CST) for neural signal embedding.
- Uses a hierarchical Interval-Area Attention (IAA) to capture spike dynamics and locality.
- Employs a self-supervised masked signals reconstruction objective for generalizable learning.
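The digest only names these components, so as an illustration, here is a minimal NumPy sketch (not the authors' implementation; all function names and shapes are hypothetical) of the two attention patterns the IAA mechanism combines: a kernelized linear attention over slots and a sliding-window attention for local dependencies.

```python
import numpy as np

def linear_attention(q, k, v):
    """Kernelized linear attention: softmax is replaced by a positive
    feature map (here elu(x)+1), giving linear cost in sequence length."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always positive
    q, k = phi(q), phi(k)
    kv = k.T @ v                       # (d, d_v) summary of all keys/values
    z = q @ k.sum(axis=0)              # per-query normalizer
    return (q @ kv) / z[:, None]

def sliding_window_attention(q, k, v, w=2):
    """Each query attends only to keys within +/- w time steps (locality)."""
    T, d = q.shape
    out = np.zeros_like(v)
    for t in range(T):
        lo, hi = max(0, t - w), min(T, t + w + 1)
        scores = q[t] @ k[lo:hi].T / np.sqrt(d)
        a = np.exp(scores - scores.max())
        a /= a.sum()                   # softmax over the local window
        out[t] = a @ v[lo:hi]
    return out
```

In a hierarchical scheme like the one described, the linear attention would summarize spike dynamics within slots cheaply, while the sliding-window attention preserves fine-grained local structure.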
Why it matters
Existing BCI models struggle with diverse, complex neural data. UniBCI addresses this with a unified pretrained model, achieving state-of-the-art performance and strong generalization across various tasks. This work is a practical step towards general-purpose neural foundation models for invasive BCIs.
Original Abstract
Modeling invasive neural spike data is fundamental to advancing high-performance brain-computer interfaces (BCIs). However, existing approaches face critical challenges, including limited-scale heterogeneous data, cross-domain distribution shift, and the intrinsic spatiotemporal complexity of invasive neural signals. In this work, we propose UniBCI, a unified pretrained model for invasive Brain-Computer Interfaces. The model integrates three key components: (1) a context-conditioned spatio-temporal tokenization (CST) scheme that embeds neural signals together with metadata into a shared representation space; (2) a hierarchical Interval-Area Attention (IAA) mechanism that captures patterns of spike dynamics in slots via linear attention and locality dependencies via sliding-window attention; and (3) a scalable self-supervised masked signals reconstruction objective for learning generalizable neural representations from large-scale unlabeled data. We construct a pretraining corpus spanning multiple species, subjects, brain regions, and behavioral experiment paradigms. These heterogeneous recordings are standardized via our proposed unified normalization and tokenization. Comprehensive experiments demonstrate that UniBCI achieves SOTA performance across diverse downstream tasks while improving generalization. Moreover, the model achieves a strong balance between accuracy and efficiency, with fewer trainable parameters and lower inference latency. These results suggest that UniBCI provides a practical step toward general-purpose neural foundation models, enabling robust, scalable, and transferable representation learning for invasive neural data. The code for this paper is available at: https://anonymous.4open.science/r/UniBCI-C805.
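The abstract's third component, a masked-reconstruction pretraining objective, can be illustrated with a minimal NumPy sketch. This is a generic sketch of the technique, not the paper's implementation; the callable `reconstruct`, the mean-filling toy model, and all shapes are assumptions for illustration.

```python
import numpy as np

def masked_reconstruction_loss(tokens, reconstruct, mask_ratio=0.5, seed=0):
    """Hide a random subset of token positions, reconstruct them from the
    visible context, and score MSE only on the masked positions.
    `reconstruct` is any callable mapping (visible_tokens, mask) -> prediction."""
    rng = np.random.default_rng(seed)
    T, d = tokens.shape
    mask = rng.random(T) < mask_ratio      # True = hidden from the model
    visible = tokens.copy()
    visible[mask] = 0.0                    # zero out masked tokens
    pred = reconstruct(visible, mask)
    err = (pred[mask] - tokens[mask]) ** 2
    return err.mean()

def mean_filler(visible, mask):
    """Toy stand-in for a model: fill masked rows with the visible mean."""
    pred = visible.copy()
    if (~mask).any():
        pred[mask] = visible[~mask].mean(axis=0)
    return pred
```

In pretraining, a learned encoder-decoder would replace `mean_filler`, and minimizing this loss over large unlabeled corpora is what drives the generalizable representations the abstract claims.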