ArXiv TLDR

Quantization robustness from dense representations of sparse functions in high-capacity kernel associative memory

arXiv: 2604.20333

Akira Tamamori

cs.NE

TLDR

This paper shows that high-capacity kernel associative memories are extremely robust to low-precision quantization yet highly sensitive to pruning, a contrast it explains with a "sparse function, dense representation" principle.

Key contributions

  • Develops a geometric theory of robust encoding in KLR-trained Hopfield networks (a minimal sketch of the setup follows this list), grounded in spontaneous symmetry breaking and Walsh analysis.
  • Shows that KLR networks are extremely robust to low-precision quantization yet highly sensitive to pruning.
  • Explains this contrast through a "sparse function, dense representation" principle.
  • Offers design guidance for hardware-efficient kernel associative memories.
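
For readers new to the setup, here is a minimal sketch of the general idea behind a KLR-trained associative memory: one logistic-regression readout per neuron over kernel features of the stored patterns, with one-step recall from a corrupted probe. The RBF kernel, scikit-learn readout, and all hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

# Illustrative setup (assumed): store P random bipolar patterns of dimension N.
rng = np.random.default_rng(0)
N, P = 64, 40
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Kernel design matrix: each stored pattern serves as an RBF landmark.
K = rbf_kernel(patterns, patterns, gamma=1.0 / N)  # shape (P, P)

# One logistic-regression readout per neuron: from a pattern's kernel
# features, predict that neuron's bipolar state.
readouts = [LogisticRegression(C=10.0).fit(K, patterns[:, i]) for i in range(N)]

# One-step recall: flip a few bits of a stored pattern, then pass the
# probe's kernel features through every per-neuron readout.
probe = patterns[0].copy()
probe[:6] *= -1.0
k_probe = rbf_kernel(probe[None, :], patterns, gamma=1.0 / N)
recalled = np.array([clf.predict(k_probe)[0] for clf in readouts])
print("bits recovered:", int((recalled == patterns[0]).sum()), "out of", N)
```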

Why it matters

This paper addresses the computational cost of high-capacity kernel associative memories. By revealing their unique quantization robustness and explaining it with a novel geometric theory, it paves the way for more hardware-efficient designs. This work also offers fundamental insights into robust neural representations.

Original Abstract

High-capacity associative memories based on Kernel Logistic Regression (KLR) are known for their exceptional performance but are hindered by high computational costs. This paper investigates the compressibility of KLR-trained Hopfield networks to understand the geometric principles of their robust encoding. We provide a comprehensive geometric theory based on spontaneous symmetry breaking and Walsh analysis, and validate it with compression experiments (quantization and pruning). Our experiments reveal a striking contrast: the network is extremely robust to low-precision quantization but highly sensitive to pruning. Our theory explains this via a "sparse function, dense representation" principle, where a sparse input mapping is implemented with a dense, bimodal parameterization. Our findings not only provide a practical path to hardware-efficient kernel memories but also offer new insights into the geometric principles of robust representation in neural systems.
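
The quantization-versus-pruning contrast can be probed with a toy experiment like the one below. The dense bimodal weight vector, the 1-bit (sign) quantizer, and the 50% magnitude-pruning rule are all assumptions for illustration, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 512, 2000

# Dense, bimodal weights: every coordinate carries signal and the values
# cluster near +/-1, mimicking the bimodal parameterization the theory
# describes (the exact distribution here is an assumption).
w = rng.choice([-1.0, 1.0], size=N) + 0.1 * rng.normal(size=N)

x = rng.normal(size=(T, N))   # random probes
y = np.sign(x @ w)            # reference outputs of the full-precision model

# 1-bit quantization: keep only each weight's sign (assumed scheme).
w_quant = np.sign(w) * np.abs(w).mean()

# Magnitude pruning: zero the 50% smallest-magnitude weights (assumed scheme).
threshold = np.quantile(np.abs(w), 0.5)
w_prune = np.where(np.abs(w) >= threshold, w, 0.0)

def agreement(w_c):
    """Fraction of probes whose output sign matches the reference."""
    return float((np.sign(x @ w_c) == y).mean())

print(f"after 1-bit quantization: {agreement(w_quant):.3f}")  # near 1.0
print(f"after 50% pruning:        {agreement(w_prune):.3f}")  # clearly worse
```

Because every weight sits near ±1, the sign alone preserves nearly all of each dot product, while zeroing half the coordinates discards half of the distributed code, which is the toy analogue of the contrast the paper reports.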
