QB-LIF: Learnable-Scale Quantized Burst Neurons for Efficient SNNs
Dewei Bai, Hongxiang Peng, Jiajun Mei, Yang Ren, Hong Qu, et al.
TL;DR
QB-LIF introduces learnable-scale quantized burst neurons for SNNs, boosting accuracy and efficiency by adapting spiking resolution and enabling hardware-friendly inference.
Key contributions
- Introduces QB-LIF neurons, reformulating burst spiking as learnable-scale quantized membrane potentials.
- Develops an absorbable scale strategy, folding learned scales into weights for efficient, accumulate-only inference.
- Proposes ReLSG-ET, a rectified-linear surrogate gradient for stable optimization in discrete multi-level spaces.
- Achieves higher accuracy and ultra-low latency on static and event-driven benchmarks compared to prior SNNs.
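The core reformulation above can be sketched numerically. The snippet below is a minimal, illustrative QB-LIF-style timestep, assuming simple leaky-integrate dynamics and a soft reset; the function name, the decay constant, and the burst cap are placeholders, not the paper's exact formulation. The key idea it shows is saturated uniform quantization of the membrane potential with a learnable scale `s`, so each neuron can emit a multi-level burst count instead of a single binary spike.

```python
import numpy as np

def qb_lif_step(u, x, s, tau=2.0, burst_max=3):
    """One illustrative QB-LIF timestep (names and dynamics are a sketch,
    not the paper's exact equations).

    u: membrane potentials, x: input current, s: learnable quantization
    scale (per layer), burst_max: saturation level of the quantizer."""
    u = u / tau + x                              # leaky integration
    # Saturated uniform quantization: burst count in {0, 1, ..., burst_max}
    b = np.clip(np.round(u / s), 0, burst_max)
    u = u - b * s                                # soft reset by emitted charge
    return b, u
```

Because `s` is a plain scalar parameter per layer, it can be trained jointly with the weights, letting each layer pick a spiking resolution matched to its membrane-potential statistics.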
Why it matters
Binary SNNs transmit at most one bit per neuron per timestep, which limits information throughput and degrades performance in deep networks under short simulation horizons. QB-LIF addresses this with adaptive, multi-level spiking, improving both accuracy and efficiency, and makes SNNs more competitive for low-latency neuromorphic hardware applications.
Original Abstract
Binary spike coding enables sparse and event-driven computation in spiking neural networks (SNNs), yet its 1-bit-per-timestep representation fundamentally limits information throughput. This bottleneck becomes increasingly restrictive in deep architectures under short simulation horizons. We propose the Quantized Burst-LIF (QB-LIF) neuron, which reformulates burst spiking as a saturated uniform quantization of membrane potentials with a learnable scale. Instead of relying on predefined multi-threshold structures, QB-LIF treats the quantization scale as a trainable parameter, allowing each layer to autonomously adapt its spiking resolution to the underlying membrane-potential statistics. To preserve hardware efficiency, we introduce an absorbable scale strategy that folds the learned quantized scale into synaptic weights during inference, maintaining a strict accumulate-only (AC) execution paradigm. To enable stable optimization in the discrete multi-level space, we further design ReLSG-ET, a rectified-linear surrogate gradient with exponential tails that sustains gradient flow across burst intervals. Extensive experiments on static (CIFAR-10/100, ImageNet) and event-driven (CIFAR10-DVS, DVS128-Gesture) benchmarks demonstrate that QB-LIF consistently outperforms binary and fixed-burst SNNs, achieving higher accuracy under ultra-low latency while preserving neuromorphic compatibility.
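The absorbable-scale strategy in the abstract rests on a simple algebraic identity: a downstream layer receiving burst counts `b` that represent analog values `b * s` can fold the scalar scale into its weights, `W @ (b * s) == (W * s) @ b`, so inference consumes raw integer counts with accumulate-only arithmetic. A minimal sketch of that check, with made-up shapes and a hypothetical scale value:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))     # downstream layer weights (illustrative)
b = rng.integers(0, 4, size=8)      # integer burst counts from a QB-LIF layer
s = 0.37                            # learned per-layer quantization scale

# Fold the scale into the weights once, at export time.
W_folded = W * s

# Inference then needs only integer accumulation of b against W_folded.
assert np.allclose(W @ (b * s), W_folded @ b)
```

Since `b` stays integer-valued, the hardware never multiplies by `s` at runtime, preserving the accumulate-only (AC) execution paradigm the paper targets.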
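The abstract also mentions ReLSG-ET, a rectified-linear surrogate gradient with exponential tails. The paper's exact function is not given here, so the sketch below only illustrates the shape the name suggests: a rectified-linear (triangular) core near the threshold region, with exponential tails so gradients decay but never vanish outright across burst intervals. All parameter values are guesses.

```python
import numpy as np

def relsg_et(u, width=1.0, alpha=0.1, beta=2.0):
    """Illustrative ReLSG-ET-style surrogate derivative (shape inferred
    from the name; width, alpha, beta are placeholder hyperparameters)."""
    a = np.abs(u)
    linear = np.maximum(0.0, 1.0 - a / width)    # rectified-linear core
    tails = alpha * np.exp(-beta * (a - width))  # exponential tails
    return np.where(a <= width, linear, tails)
```

In a surrogate-gradient training loop, a function of this shape would replace the zero-almost-everywhere derivative of the quantizer during the backward pass, keeping some gradient signal even for potentials far from any burst boundary.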