ArXiv TLDR

Energy-Efficient Implementation of Spiking Recurrent Cells on FPGA

2605.10679

Pascal Harmeling, Florent De Geeter, Guillaume Drion

cs.NE

TLDR

This paper presents an energy-efficient FPGA accelerator for Spiking Recurrent Cell (SRC) neural networks, balancing biological plausibility and hardware cost.

Key contributions

  • Developed an FPGA accelerator for Spiking Recurrent Cell (SRC) neural networks.
  • Introduced mathematical simplifications that avoid costly unary operators (tanh, exp) and floating-point arithmetic.
  • Achieved 96.31% accuracy on MNIST with a processing time of 1.74 ms per digit.
  • Demonstrated 92.89% accuracy at 0.45 mJ/digit using 4-bit quantized weights.
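The paper's exact simplifications are not spelled out in this summary, but the general technique of replacing tanh/exp with shift-friendly piecewise-linear segments in fixed-point arithmetic can be sketched. The example below is purely illustrative: it uses the well-known PLAN sigmoid approximation (all slopes are powers of two, so hardware needs only shifts and adds) together with the identity tanh(x) = 2·sigmoid(2x) − 1, not the paper's own approximation.

```python
# Illustrative fixed-point tanh via piecewise-linear segments (PLAN-style
# sigmoid; NOT the paper's scheme, whose exact segments are not given here).
# Q8.8 fixed point: integer v represents the real value v / 256.
SCALE = 1 << 8

def sigmoid_fx(x):
    """PLAN piecewise-linear sigmoid: every slope is a power of two,
    so the hardware datapath needs only shifts and adds (no multiplier)."""
    a = -x if x < 0 else x                      # |x| in Q8.8
    if a >= 5 * SCALE:                          # |x| >= 5      -> 1.0
        y = SCALE
    elif a >= (19 * SCALE) >> 3:                # 2.375 <= |x| < 5
        y = (a >> 5) + int(0.84375 * SCALE)     # 0.03125*|x| + 0.84375
    elif a >= SCALE:                            # 1 <= |x| < 2.375
        y = (a >> 3) + int(0.625 * SCALE)       # 0.125*|x| + 0.625
    else:                                       # |x| < 1
        y = (a >> 2) + (SCALE >> 1)             # 0.25*|x| + 0.5
    return SCALE - y if x < 0 else y            # sigma(-x) = 1 - sigma(x)

def tanh_fx(x):
    """tanh(x) = 2*sigmoid(2x) - 1; doubling is a shift, so still multiplier-free."""
    return 2 * sigmoid_fx(2 * x) - SCALE

# Example: tanh_fx(256) represents tanh(1.0); 192/256 = 0.75 vs. true 0.7616.
```

Power-of-two slopes are attractive on LUT-based FPGAs because each segment collapses to a barrel shift plus a constant add, avoiding DSP slices entirely.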

Why it matters

This work addresses the trade-off between biological plausibility and hardware cost in SNNs, whose neuron models are often either too complex for efficient FPGA implementation or too simple to retain rich dynamics. It demonstrates that SRC-based SNNs can deliver competitive performance with significantly reduced energy consumption, while maintaining richer neuronal dynamics.

Original Abstract

Spiking Neural Networks (SNNs) can reduce energy consumption compared to conventional Artificial Neural Networks (ANNs) when spiking activity is sparse and the neuron model is hardware-friendly. However, biologically faithful models are often too costly to implement on FPGAs, whereas very simple models (e.g., IF/LIF) sacrifice part of the neuronal dynamics. In this work, we present an FPGA accelerator for an SNN using Spiking Recurrent Cell (SRC) neurons, providing a trade-off between biological plausibility and hardware cost. We propose a set of mathematical simplifications that remove costly unary operators (tanh, exp) and avoid floating-point arithmetic through scaling and piecewise-defined approximations. The complete network is implemented in VHDL and validated using spiking traces derived from the MNIST dataset. The weight matrices computed off-line are stored directly in LUT-registers without any adaptation. This demonstrates the robustness of SRC cells. Experiments were conducted on an Artix-7 XC7A200T clocked at 100 MHz. The reference implementation achieves 96.31% accuracy with a 220-image spiking trace and a processing time of 1.7424 ms per digit. We then investigate accuracy/energy trade-offs by reducing the spiking trace length and quantizing synaptic weights down to 4 bits, achieving 93.32% accuracy at 0.55 mJ per digit (55 images, 5-bit weights) and 92.89% at 0.45 mJ (44 images, 4-bit weights). These results show that SRC-based SNNs can deliver competitive performance with reduced energy consumption, while preserving richer neuronal dynamics than standard LIF/IF models.
