ArXiv TLDR

Geometric analysis of attractor boundaries and storage capacity limits in kernel Hopfield networks

arXiv: 2605.00366

Akira Tamamori

cs.NE, cs.LG

TLDR

This paper investigates KLR-trained Hopfield networks, demonstrating high storage capacity (P/N ≈ 16 for random patterns, ≈ 20 for structured data) and showing that loss of dynamical stability, not geometry, limits their ultimate storage.

Key contributions

  • Analyzes attractor geometry and storage limits in Hopfield networks trained with Kernel Logistic Regression (KLR); a minimal sketch of such a network follows this list.
  • Achieves high storage: P/N ≈ 16 for random sequences, P/N ≈ 20 for structured data.
  • Reveals sharp, phase-transition-like attractor boundaries with steep potential barriers.
  • Shows storage limit is due to dynamical stability loss, not geometric separability.
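
The sketch below illustrates the kind of network under study: a Hopfield-style recurrent network in which each neuron's update is driven by a kernel logistic regression over the stored patterns. This is a toy under stated assumptions, not the paper's implementation: the RBF kernel, the plain gradient-ascent trainer, and all names (`rbf_kernel`, `recall`, `gamma`, the learning rate) are illustrative choices.

```python
# Minimal, hypothetical sketch of a KLR-trained Hopfield-style network.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 40                        # neurons, stored patterns (load P/N = 0.4)
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def rbf_kernel(A, B, gamma=0.02):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# One kernel logistic regression per neuron, in dual form: neuron i's
# logit at state s is sum_mu alphas[i, mu] * k(s, pattern_mu).
K = rbf_kernel(patterns, patterns)
alphas = np.zeros((N, P))
targets = (patterns.T + 1.0) / 2.0    # (N, P) targets in {0, 1}
for _ in range(300):                  # plain gradient ascent on the log-likelihood
    probs = 1.0 / (1.0 + np.exp(-(alphas @ K)))
    alphas += 0.5 * (targets - probs) @ K

def recall(s, steps=20):
    """Synchronous recall: each neuron takes the sign of its KLR logit."""
    for _ in range(steps):
        k = rbf_kernel(s[None, :], patterns)[0]   # similarity to each stored pattern
        s = np.where(alphas @ k >= 0.0, 1.0, -1.0)
    return s

# Retrieve pattern 0 from a cue with 10% of its bits flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1.0
print("overlap after recall:", recall(cue) @ patterns[0] / N)  # ~1.0 on success
```

At this toy load of P/N = 0.4, recall recovers the stored pattern exactly; the paper's contribution is to characterize what happens as the load is pushed toward the P/N ≈ 16–20 regime.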

Why it matters

This research clarifies the stability mechanisms of high-capacity associative memories. Identifying dynamical stability as the main storage constraint offers crucial guidance for designing more robust and larger-scale retrieval systems.

Original Abstract

High-capacity associative memories based on Kernel Logistic Regression (KLR) exhibit strong storage capabilities, but the dynamical and geometric mechanisms underlying their stability remain poorly understood. This paper investigates the global geometry of attractor basins and the physical determinants of the storage limit in KLR-trained Hopfield networks. We combine empirical evaluations using random sequences and real-world image embeddings (CIFAR-10) with phenomenological morphing experiments and statistical Signal-to-Noise Ratio (SNR) analysis. Our experiments reveal that the network achieves a storage capacity for random sequences up to $P/N \approx 16$, and maintains stable retrieval for structured data at effective loads near $P/N \approx 20$. Through morphing analysis, we reveal that attractors on the "Ridge of Optimization" are separated by sharp, phase-transition-like boundaries, characterized by steep effective potential barriers and critical slowing down. Furthermore, by contrasting an SNR analysis with a geometric reference point inspired by Cover's theorem, we show that the ultimate storage limit is constrained primarily not by a lack of geometric separability in the feature space, but by the loss of dynamical stability against crosstalk noise. These findings suggest that KLR networks function as highly localized, exemplar-based memories that operate optimally just before the onset of dynamical collapse, providing new insights into the design of robust, large-scale retrieval systems.
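
The morphing analysis can be illustrated with the same toy model. The hypothetical probe below (reusing `patterns`, `recall`, and `N` from the sketch above) interpolates bitwise between two stored patterns and records which attractor the dynamics settle into at each mixing ratio; the paper reports that the crossover between basins is sharp rather than gradual.

```python
# Hypothetical morphing probe between two stored attractors; assumes
# `patterns`, `recall`, and `N` from the previous sketch are in scope.
import numpy as np

def morph(a, b, lam, rng):
    """Bitwise interpolation: each component is taken from b with probability lam."""
    return np.where(rng.random(a.shape) < lam, b, a)

probe_rng = np.random.default_rng(1)
a, b = patterns[0], patterns[1]
for lam in np.linspace(0.0, 1.0, 11):
    s = recall(morph(a, b, lam, probe_rng))
    # Overlaps lie in [-1, 1]; a sharp flip from m_a ~ +1 to m_b ~ +1 as
    # lam crosses the basin boundary is the phase-transition-like signature.
    print(f"lam={lam:.1f}  m_a={s @ a / N:+.2f}  m_b={s @ b / N:+.2f}")
```

For context on the SNR argument: in the classical Hopfield model the retrieval signal is O(1) while crosstalk noise grows roughly as √(P/N), so retrieval fails when the SNR √(N/P) becomes too small. The abstract's claim is that an analogous dynamical-stability criterion, rather than Cover-style linear separability in feature space, is what ultimately caps the KLR network's capacity.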
