ArXiv TLDR

Combining Convolution and Delay Learning in Recurrent Spiking Neural Networks

arXiv: 2604.15997

Lúcio Folly Sanches Zebendo, Eleonora Cicciarella, Michele Rossi

cs.NE

TLDR

This paper extends DelRec, a recurrent SNN that learns axonal delays along with the other network parameters, with convolutional recurrent connections, yielding a far smaller memory footprint and much faster inference while retaining accuracy.

Key contributions

  • Extends DelRec, a recurrent SNN with runtime-learned axonal delays, with convolutional recurrent connections (see the sketch after this list).
  • Cuts recurrent parameters by around 99%, substantially shrinking the memory footprint.
  • Runs inference 52x faster than the original DelRec.
  • Retains DelRec's accuracy on an audio classification task.
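To make the idea concrete, here is a minimal PyTorch sketch of a leaky integrate-and-fire (LIF) layer whose recurrent pathway is a small 1D convolution over the feature axis combined with per-channel learnable delays. This is a hypothetical illustration, not the authors' code: the class name ConvDelayRecLIF, the softmax-over-taps soft delay, and all hyperparameters are our own choices; the actual mechanism lives in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDelayRecLIF(nn.Module):
    """Recurrent LIF layer with convolutional recurrent weights and
    soft learnable delays. Hypothetical sketch, not the paper's code."""

    def __init__(self, channels, kernel_size=3, max_delay=8,
                 beta=0.9, threshold=1.0):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel preserves feature width"
        # A small 1D conv over the feature axis replaces the dense
        # C x C recurrent matrix: kernel_size weights instead of C**2.
        self.rec_conv = nn.Conv1d(1, 1, kernel_size,
                                  padding=kernel_size // 2, bias=False)
        # Per-channel logits over delay taps; the softmax gives a soft,
        # differentiable stand-in for a learned axonal delay.
        self.delay_logits = nn.Parameter(torch.zeros(channels, max_delay))
        self.max_delay, self.beta, self.threshold = max_delay, beta, threshold

    def forward(self, x):
        # x: (batch, time, channels) input current.
        B, T, C = x.shape
        v = x.new_zeros(B, C)                    # membrane potentials
        buf = x.new_zeros(B, self.max_delay, C)  # past spikes, newest first
        out = []
        for t in range(T):
            w = F.softmax(self.delay_logits, dim=1)          # (C, D)
            delayed = (buf * w.t().unsqueeze(0)).sum(dim=1)  # (B, C)
            # Convolutional recurrent input over the feature axis.
            rec = self.rec_conv(delayed.unsqueeze(1)).squeeze(1)
            v = self.beta * v + x[:, t] + rec
            s = (v >= self.threshold).float()  # Heaviside; training would
            v = v - s * self.threshold         # need a surrogate gradient
            buf = torch.cat([s.unsqueeze(1), buf[:, :-1]], dim=1)
            out.append(s)
        return torch.stack(out, dim=1)           # (batch, time, channels)

layer = ConvDelayRecLIF(channels=64)
spikes = layer(torch.rand(2, 100, 64))  # binary spike trains, (2, 100, 64)
```

For 64 channels, a dense recurrent matrix would hold 4,096 weights, while this layer's recurrent pathway holds 3 conv weights plus 64x8 delay logits; the savings grow with layer width, which is the flavor of the roughly 99% reduction the authors report.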

Why it matters

By shrinking the recurrent parameter count and inference time without sacrificing accuracy, this work makes recurrent SNNs markedly more practical to deploy on resource-constrained edge systems.

Original Abstract

Spiking neural networks (SNNs) are rapidly gaining momentum as an alternative to conventional artificial neural networks in resource constrained edge systems. In this work, we continue a recent research line on recurrent SNNs where axonal delays are learned at runtime along with the other network parameters. The first proposed approach, dubbed DelRec, demonstrated the benefit of recurrent delay learning in SNNs. Here, we extend it by advocating the use of convolutional recurrent connections in conjunction with the DelRec delay learning mechanism. According to our tests on an audio classification task, this leads to a streamlined architecture with smaller memory footprint (around 99% savings in terms of number of recurrent parameters) and a much faster (52x) inference time, while retaining DelRec's accuracy. Our code is available at: https://github.com/luciozebendo/delrec_snn/tree/conv_delays

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.