Globally Optimal Training of Spiking Neural Networks via Parameter Reconstruction
Himanshu Udupi, Xiaocong Yang, ChengXiang Zhai
TLDR
This paper introduces a globally optimal parameter reconstruction algorithm for training Spiking Neural Networks, sidestepping the approximation errors that surrogate gradients introduce.
Key contributions
- Extends convexification theory to parallel recurrent threshold networks, encompassing SNNs.
- Proposes a novel parameter reconstruction algorithm for SNN training.
- Achieves consistent and significant performance advantages over existing SNN training methods, both as a standalone method and in combination with surrogate-gradient training.
- Demonstrates data scalability and robustness to model configurations, pointing toward large-scale SNN training.
Why it matters
Training Spiking Neural Networks (SNNs) has been challenging because the spike function is non-differentiable, so most methods fall back on surrogate gradients whose approximation errors accumulate across layers. This work offers a globally optimal training method that avoids those errors entirely, yielding more accurate and robust SNNs and paving the way for more efficient, biologically plausible AI.
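To make the core difficulty concrete, here is a minimal, illustrative sketch (not from the paper) of the standard surrogate-gradient trick in PyTorch: the forward pass uses the true Heaviside spike, while the backward pass substitutes a smooth sigmoid derivative. The `scale` value and the choice of sigmoid surrogate are common but arbitrary assumptions; this hand-crafted substitution is exactly the approximation the paper's reconstruction algorithm avoids.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a sigmoid-derivative surrogate gradient.

    Forward: the non-differentiable step s = 1[v >= 0].
    Backward: replaces the true derivative (zero almost everywhere)
    with a smooth approximation -- the source of the layer-by-layer
    approximation error discussed above.
    """

    scale = 5.0  # surrogate sharpness (illustrative choice)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        # v is membrane potential minus threshold; spike when v >= 0
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.scale * v)
        surrogate = SurrogateSpike.scale * sig * (1.0 - sig)
        return grad_output * surrogate

# Usage: gradients flow through the surrogate, not the true step.
v = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero only because of the surrogate
```

Because the true derivative of the step is zero almost everywhere, any gradient signal here comes entirely from the surrogate, and the mismatch compounds layer by layer in deep SNNs.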
Original Abstract
Spiking Neural Networks (SNNs) have been proposed as biologically plausible and energy-efficient alternatives to conventional Artificial Neural Networks (ANNs). However, the training of SNNs usually relies on surrogate gradients due to the non-differentiability of the spike function, introducing approximation errors that accumulate across layers. To address this challenge, we extend the work on convexification of parallel feedforward threshold networks to parallel recurrent threshold networks, which subsume parallel SNNs as a structured special case. Building on this theoretical framework, we propose a parameter reconstruction algorithm for SNN training that demonstrates consistent and significant advantages across various tasks, both as a standalone method and in combination with surrogate-gradient training. The ablations further demonstrate the data scalability and robustness to model configurations of our training algorithm, pointing toward its potential in large-scale SNN training.
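For readers less familiar with why an SNN is a recurrent threshold network, below is a minimal sketch of a standard leaky integrate-and-fire (LIF) layer; the `tau` and `theta` values and soft reset are illustrative assumptions, and the paper's parallel recurrent threshold formulation (and its reconstruction algorithm) is more general than what is shown here.

```python
import torch

def lif_layer(x, w, tau=2.0, theta=1.0):
    """Minimal leaky integrate-and-fire (LIF) layer over T timesteps.

    x: (T, n_in) input currents; w: (n_in, n_out) weights.
    The membrane potential v decays, integrates weighted input, and
    emits a spike through a hard threshold with a soft reset.  The
    recurrence in v plus the Heaviside threshold is the "recurrent
    threshold network" structure the abstract refers to.
    """
    T, _ = x.shape
    v = torch.zeros(w.shape[1])
    spikes = []
    for t in range(T):
        v = v * (1.0 - 1.0 / tau) + x[t] @ w   # leaky integration (recurrence in time)
        s = (v >= theta).float()               # non-differentiable spike
        v = v - s * theta                      # soft reset after spiking
        spikes.append(s)
    return torch.stack(spikes)                 # (T, n_out) binary spike train

# Usage: 8 timesteps, 3 input channels, 2 output neurons.
out = lif_layer(torch.rand(8, 3), torch.randn(3, 2))
print(out.shape, out.unique())
```

The temporal recurrence in the membrane potential combined with the hard threshold is what allows parallel SNNs to be treated as a structured special case of parallel recurrent threshold networks.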