Scalable Memristive-Friendly Reservoir Computing for Time Series Classification
Coşku Can Horuz, Andrea Ceni, Claudio Gallicchio, Sebastian Otte
TLDR
MARS offers scalable, memristive-friendly reservoir computing, achieving up to 21x training speedups and outperforming SOTA sequence models on time series classification.
Key contributions
- Proposes MARS: a simplified, scalable parallel memristive-friendly reservoir computing architecture.
- Introduces novel subtractive skip connections for deeper model composition and enhanced effectiveness.
- Achieves up to 21x training speedup over the inherently lightweight ESN baseline, with significantly improved predictive performance.
- Outperforms strong gradient-based sequence models (LRU, S5, Mamba) on several long-sequence benchmarks.
Why it matters
This work paves the way for scalable neuromorphic learning systems by combining high predictive capability with radically improved computational efficiency. It offers a clear pathway to energy-efficient, low-latency implementations on emerging memristive and in-memory hardware.
Original Abstract
Memristive devices present a promising foundation for next-generation information processing by combining memory and computation within a single physical substrate. This unique characteristic enables efficient, fast, and adaptive computing, particularly well suited for deep learning applications. Among recent developments, the memristive-friendly echo state network (MF-ESN) has emerged as a promising approach that combines memristive-inspired dynamics with the training simplicity of reservoir computing, where only the readout layer is learned. Building on this framework, we propose memristive-friendly parallelized reservoirs (MARS), a simplified yet more effective architecture that enables efficient scalable parallel computation and deeper model composition through novel subtractive skip connections. This design yields two key advantages: substantial training speedups of up to 21x over the inherently lightweight echo state network baseline and significantly improved predictive performance. Moreover, MARS demonstrates what is possible with parallel memristive-friendly reservoir computing: on several long-sequence benchmarks our compact gradient-free models substantially outperform strong gradient-based sequence models such as LRU, S5, and Mamba, while reducing full training time from minutes or hours down to seconds or even only a few hundred milliseconds. Our work positions parallel memristive-friendly computing as a promising route towards scalable neuromorphic learning systems that combine high predictive capability with radically improved computational efficiency, while providing a clear pathway to energy-efficient, low-latency implementations on emerging memristive and in-memory hardware.
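The abstract describes the pipeline only at a high level, so the following is a minimal, hedged sketch of the general idea rather than the authors' implementation: several untrained reservoirs run in parallel over the input sequence, their states are stacked in depth with a subtractive skip connection, and only a linear ridge-regression readout is trained. The memristive-friendly state update, the exact form of the skip connection, and every name and hyperparameter in the snippet (`run_parallel_stack`, `fit_readout`, reservoir sizes, leak rates) are illustrative assumptions standing in for details the summary does not give.

```python
# Minimal sketch (assumed, not the paper's code): parallel leaky ESN-style reservoirs,
# depth-wise stacking with a subtractive skip connection, and a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, leak=0.3):
    """Random, untrained reservoir weights (standard ESN-style initialization)."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling
    return W_in, W, leak

def run_reservoir(params, inputs):
    """Drive one reservoir over a sequence; return its state trajectory (T, n_res)."""
    W_in, W, leak = params
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.stack(states)

def run_parallel_stack(inputs, n_parallel=4, n_res=50, depth=2):
    """Run n_parallel reservoirs per layer; deeper layers read the previous layer's
    concatenated states, and a subtractive skip connection removes the previous
    layer's states again (assumed interpretation of 'subtractive skip connection')."""
    layer_in, prev = inputs, None
    for _ in range(depth):
        states = [run_reservoir(make_reservoir(layer_in.shape[1], n_res), layer_in)
                  for _ in range(n_parallel)]
        concat = np.concatenate(states, axis=1)          # (T, n_parallel * n_res)
        if prev is not None and prev.shape == concat.shape:
            concat = concat - prev                       # subtractive skip connection (assumed)
        prev, layer_in = concat, concat
    return concat

def fit_readout(features, targets, ridge=1e-4):
    """Train only the linear readout via ridge regression, as in reservoir computing."""
    A = features.T @ features + ridge * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ targets)

# Toy usage: final-state features for a handful of random sequences, random one-hot labels.
X = np.stack([run_parallel_stack(rng.standard_normal((100, 3)))[-1] for _ in range(8)])
y = np.eye(2)[rng.integers(0, 2, 8)]
W_out = fit_readout(X, y)
print("readout weights:", W_out.shape)
```

Note that `fit_readout` is the only trained component; all reservoir weights stay fixed after random initialization, which is where the speedup over gradient-based sequence models comes from in this family of methods.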