Efficient event-driven retrieval in high-capacity kernel Hopfield networks
TLDR
This paper shows that asynchronous Kernel Logistic Regression (KLR) Hopfield networks achieve high storage capacity and efficient event-driven retrieval, making them well suited to neuromorphic hardware.
Key contributions
- Asynchronous sequential updates maintain recall accuracy statistically indistinguishable from synchronous dynamics.
- Empirical storage capacity reaches P/N ≈ 30 on random patterns, far beyond the classical Hopfield limit of roughly 0.14 patterns per neuron.
- Retrieval converges using a number of bit-flip events close to the initial Hamming distance from the target, with no observable spurious oscillations (sketched in code below).
- The large-margin attractors induced by KLR learning yield a smooth energy landscape suited to sparse, event-driven computation.
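To make the event-counting claim concrete, here is a minimal NumPy sketch of an asynchronous, event-driven retrieval loop that counts bit flips and compares them to the initial Hamming distance. The readout `predict_bit` below is a placeholder (a kernel-weighted vote over stored patterns), not the paper's trained KLR classifier, and all sizes and parameters (`N`, `P`, `gamma`, the update schedule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 20                        # neurons, stored patterns (toy sizes)
patterns = rng.choice([-1, 1], size=(P, N))

def predict_bit(state, i, gamma=0.05):
    # Placeholder readout: RBF-kernel-weighted vote of the stored bits at
    # site i. Stands in for the paper's trained per-neuron KLR classifier.
    k = np.exp(-gamma * ((patterns - state) ** 2).sum(axis=1))
    return 1 if k @ patterns[:, i] >= 0 else -1

def retrieve_async(state, max_sweeps=50):
    state = state.copy()
    flips = 0                         # event counter: total bit flips
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):   # random sequential order
            new_bit = predict_bit(state, i)
            if new_bit != state[i]:
                state[i] = new_bit    # one "event": a single bit flip
                flips += 1
                changed = True
        if not changed:               # fixed point reached
            break
    return state, flips

# Corrupt a stored pattern and compare events to the initial Hamming distance.
target = patterns[0]
probe = target.copy()
noise_idx = rng.choice(N, size=10, replace=False)
probe[noise_idx] *= -1                # initial Hamming distance = 10

final, flips = retrieve_async(probe)
print("recalled:", np.array_equal(final, target), "| flip events:", flips)
```

Under the paper's claim, the flip count should stay close to the initial Hamming distance (here, 10): each corrupted bit is corrected roughly once, with no wasted oscillatory events.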
Why it matters
Traditional high-capacity associative memories are bottlenecked by computationally expensive synchronous updates. This work shows that KLR Hopfield networks retrieve efficiently under asynchronous, event-driven dynamics, making them candidates for scalable, low-power neuromorphic hardware and removing a key obstacle to deploying high-capacity memory models on such platforms.
Original Abstract
High-capacity associative memory models, such as Kernel Logistic Regression (KLR) Hopfield networks, have demonstrated strong storage capabilities but typically rely on computationally expensive synchronous updates. This reliance poses a bottleneck for deployment on energy-efficient, event-driven neuromorphic hardware. In this paper, we investigate the asynchronous retrieval dynamics of KLR Hopfield networks. We show empirically that, under appropriately tuned kernel parameters, asynchronous sequential updates exhibit trajectories that are statistically indistinguishable from those of synchronous dynamics, while maintaining high recall accuracy within the tested regime for random patterns. Furthermore, we find that the asynchronous network achieves empirical storage capacities approaching $P/N \approx 30$ in static random pattern regimes, exceeding classical limits. To evaluate computational efficiency, we analyze the total number of state transitions (bit flips) required for error correction. The results show that the network converges using a number of events close to the initial Hamming distance from the target pattern, without observable spurious oscillations. These findings suggest that the large-margin attractors induced by KLR learning create a smooth energy landscape suited for sparse, event-driven computation, providing a basis for scalable and low-power associative memory on neuromorphic architectures.
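The abstract's learning rule trains, for each neuron, a kernel logistic classifier that predicts that neuron's bit from the network state. Below is a minimal NumPy sketch of that idea in dual form; the RBF kernel, `gamma`, learning rate, ridge term, and plain gradient-descent training are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 20                        # toy sizes; the paper reports P/N up to ~30
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def gram(X, gamma=0.05):
    # RBF Gram matrix: K[mu, nu] = exp(-gamma * ||x_mu - x_nu||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_klr(K, y, lr=0.5, ridge=1e-3, steps=500):
    # Dual-form kernel logistic regression: score f = K @ alpha.
    # Gradient descent on the mean logistic loss of y * f plus a ridge term.
    alpha = np.zeros(K.shape[0])
    for _ in range(steps):
        margins = y * (K @ alpha)
        grad = K @ (-y / (1.0 + np.exp(margins))) / len(y) + ridge * (K @ alpha)
        alpha -= lr * grad
    return alpha

K = gram(patterns)
# One classifier per neuron: alphas[i] predicts bit i of the stored patterns.
alphas = np.stack([train_klr(K, patterns[:, i]) for i in range(N)])

def predict_bit(state, i, gamma=0.05):
    # Neuron i's asynchronous update: the sign of its KLR score at the
    # current state (equivalently, thresholding the probability at 0.5).
    k = np.exp(-gamma * ((patterns - state) ** 2).sum(axis=1))
    return 1.0 if k @ alphas[i] >= 0 else -1.0
```

This trained `predict_bit` can replace the placeholder readout in the earlier retrieval sketch; the paper's point is that, with well-tuned kernel parameters, the resulting attractors have large margins, so asynchronous updates settle quickly and sparsely.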