ArXiv TLDR

EdgeSpike: Spiking Neural Networks for Low-Power Autonomous Sensing in Edge IoT Architectures

arXiv:2604.27004

Gustav Olaf Yunus Laitinen-Fredriksson Lundstrom-Imanov, Taner Yilmaz

cs.NE · cs.LG · eess.SP

TLDR

EdgeSpike is a low-power spiking neural network framework for autonomous sensing in edge IoT, reaching 91.4% mean accuracy (within 1.2 pp of INT8 CNN baselines) while cutting energy per inference by up to 47x.

Key contributions

  • Unifies hybrid training, hardware-aware NAS, and an event-driven runtime for SNNs.
  • Achieves 18x-47x energy reduction on neuromorphic hardware and 4.6x-7.9x on Cortex-M vs. CNNs.
  • Maintains 91.4% accuracy, close to CNNs, and extends battery life by 6.3x in field deployments.
  • Enables continual on-device adaptation using a lightweight plasticity rule without backpropagation.
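The continual-adaptation bullet hinges on a local plasticity rule that updates each synapse from pre- and post-synaptic activity alone, with no gradient flow. The paper does not spell out the rule, but a minimal Hebbian-style sketch of the idea (illustrative names and constants, not the authors' implementation) looks like this:

```python
import numpy as np

def local_plasticity_update(w, pre_spikes, post_spikes, lr=0.01, decay=0.001):
    """Hypothetical Hebbian-style local rule: strengthen weights where pre-
    and post-synaptic spikes coincide, with mild weight decay. Each synapse
    uses only locally available activity, so no backpropagation is needed."""
    coincidence = np.outer(post_spikes, pre_spikes)  # shape (n_post, n_pre)
    return w + lr * coincidence - decay * w

# Toy usage with binary spike indicators for one timestep
w = np.zeros((2, 3))
pre = np.array([1, 0, 1])
post = np.array([0, 1])
w = local_plasticity_update(w, pre, post)
```

Because the update is purely local and additive, it fits the memory and compute budgets of a Cortex-M class device far better than storing activations for a backward pass.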

Why it matters

EdgeSpike makes low-power AI on edge IoT devices substantially more practical: large energy savings plus on-device adaptation turn autonomous sensing into something viable for long-term, battery-constrained deployments, pointing toward more sustainable and intelligent edge computing.

Original Abstract

We propose EdgeSpike, a co-designed spiking neural network (SNN) framework for autonomous low-power sensing in edge Internet of Things (IoT) architectures. EdgeSpike unifies (i) a hybrid surrogate-gradient and direct-encoding training pipeline, (ii) a hardware-aware neural architecture search (NAS) bounded by per-inference energy and memory budgets, (iii) an event-driven runtime targeting Intel Loihi 2, SpiNNaker 2, and commodity ARM Cortex-M microcontrollers with custom spike-sparse SIMD kernels, and (iv) a lightweight local plasticity rule enabling continual on-device adaptation without backpropagation. The framework is evaluated across five sensing tasks (keyword spotting, vibration-based machine fault detection, surface electromyography gesture recognition, 77 GHz radar human-activity classification, and structural-health acoustic-emission monitoring) on three hardware targets. EdgeSpike achieves a mean classification accuracy of 91.4%, within 1.2 percentage points (pp) of strong INT8 convolutional neural network (CNN) baselines (mean 92.6%), while reducing energy per inference by 18x to 47x on neuromorphic hardware (mean 31x) and by 4.6x to 7.9x on Cortex-M (mean 6.1x). End-to-end latency remains at or below 9.4 ms across all 15 task-hardware configurations. A seven-month, 64-node wireless field deployment confirms a 6.3x extension in projected battery lifetime (from 312 to 1978 days at 2 Wh per node) and bounded accuracy degradation under seasonal drift (0.7 pp with on-device adaptation versus 2.1 pp without). Hardware-aware NAS evaluates 8400 candidates and yields a 12-point Pareto front. EdgeSpike will be released as open source with reproducible training pipelines, hardware-portable runtimes, and benchmark suites.
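The abstract describes a NAS bounded by per-inference energy and memory budgets that yields a 12-point Pareto front from 8400 candidates. The core selection step — filter candidates by hard budgets, then keep only non-dominated points on the (energy, error) trade-off — can be sketched as follows (field names and numbers are illustrative, not from the paper):

```python
def within_budget(c, energy_budget_uj, mem_budget_kb):
    """Hard per-inference energy and memory constraints on a candidate."""
    return c["energy_uj"] <= energy_budget_uj and c["mem_kb"] <= mem_budget_kb

def pareto_front(candidates):
    """Keep candidates not dominated on (energy_uj, error); lower is better
    on both axes. A point is dominated if another is at least as good on
    both objectives and strictly different."""
    front = []
    for c in candidates:
        dominated = any(
            o["energy_uj"] <= c["energy_uj"] and o["error"] <= c["error"]
            and o != c
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Illustrative candidate pool (not real EdgeSpike search results)
candidates = [
    {"energy_uj": 12, "mem_kb": 40, "error": 0.09},
    {"energy_uj": 30, "mem_kb": 80, "error": 0.07},
    {"energy_uj": 35, "mem_kb": 90, "error": 0.08},  # over budget, and dominated
]
feasible = [c for c in candidates if within_budget(c, 32, 96)]
front = pareto_front(feasible)
```

The hard-budget filter runs first so the Pareto front only ever contains deployable architectures; the remaining front exposes the accuracy-vs-energy knob to the system designer.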
