ArXiv TLDR

Benchmarking local Hebbian learning rules for memory storage and prototype extraction

arXiv: 2605.01074

Anders Lansner, Andreas Knoblauch, Naresh B Ravichandran, Pawel Herman

cs.NE, cs.LG

TLDR

This paper benchmarks seven Hebbian learning rules for associative memory and finds that Bayesian-Hebbian rules offer the highest storage capacity.

Key contributions

  • Benchmarked seven Hebbian learning rules in recurrent networks.
  • Evaluated pattern storage capacity, weight information capacity, and prototype extraction.
  • Identified Bayesian-Hebbian rules as having the highest capacity.
  • Showed that the original Hebb rule has the worst capacity, while covariance learning is robust with moderate capacity.
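To make the compared rule families concrete, here is a minimal sketch of the weight updates in textbook form: the additive Hebb rule, the covariance rule, and a Bayesian-Hebbian (BCPNN-style) weight computed from activation probabilities. This is an illustrative reconstruction, not the paper's exact formulation; variable names and the `eps` regularizer are assumptions.

```python
import numpy as np

def hebb_update(W, x):
    # Original additive Hebb rule: strengthen weights between co-active units.
    return W + np.outer(x, x)

def covariance_update(W, x, p):
    # Covariance rule: subtract the mean activity p so weights track
    # co-fluctuations rather than raw co-activity.
    return W + np.outer(x - p, x - p)

def bcpnn_weights(patterns, eps=1e-3):
    # Bayesian-Hebbian (BCPNN-style) weights from unit and pairwise
    # activation probabilities: w_ij = log( P(i,j) / (P(i) P(j)) ).
    X = np.asarray(patterns, dtype=float)
    pi = X.mean(axis=0) + eps            # unit activation probabilities
    pij = (X.T @ X) / len(X) + eps       # pairwise co-activation probabilities
    return np.log(pij / np.outer(pi, pi))
```

Note the qualitative difference: the Hebb rule only ever adds weight, the covariance rule can produce negative weights for anti-correlated units, and the Bayesian-Hebbian weight is positive exactly when two units co-activate more often than chance.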

Why it matters

This paper systematically benchmarks Hebbian learning rules for associative memory, a key function in AI and brain science. It highlights superior rules for prototype extraction, guiding the development of more efficient and robust memory systems.

Original Abstract

Associative memory or content-addressable memory is an important component function in computer science and information processing, and at the same time a key concept in cognitive and computational brain science. Many different neural network architectures and learning rules have been proposed to model the brain's associative memory while investigating key component functions like figure-ground segmentation, perceptual reconstruction and rivalry. A less investigated but equally important capability of associative memory is prototype extraction where the training set comprises distorted prototype instances and the task is to recall the correct generating prototype given a new distorted instance. In this paper we benchmark associative memory function of seven different Hebbian learning rules employed in non-modular and modular recurrent networks with winner-take-all dynamics operating on moderately sparse binary patterns. We measure pattern storage and weight information capacity, prototype extraction capabilities, and sensitivity to correlations in data. The original additive Hebb rule comes out with worst capacity, covariance learning proves to be robust but with moderate capacity, and the Bayesian-Hebbian learning rules show highest capacity in almost all different conditions tested.
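The prototype-extraction setup described in the abstract can be sketched end to end: store only distorted copies of sparse binary prototypes in a recurrent network, then cue it with a fresh distorted instance and let winner-take-all dynamics settle. The sketch below uses the covariance rule and k-winners-take-all recall; the network sizes, noise level, and update schedule are illustrative assumptions, not the paper's benchmark parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, a = 100, 3, 0.2            # units, prototypes, activity level (sparseness)
flip = 0.05                      # distortion: probability of flipping each bit

# Random sparse binary prototypes the network should extract.
prototypes = (rng.random((K, N)) < a).astype(float)

def distort(x):
    # Flip each bit independently with probability `flip`.
    mask = rng.random(N) < flip
    return np.abs(x - mask)

# Train only on distorted instances, using the covariance rule.
train = np.array([distort(p) for p in prototypes for _ in range(20)])
p_mean = train.mean(axis=0)
W = (train - p_mean).T @ (train - p_mean) / len(train)
np.fill_diagonal(W, 0.0)         # no self-connections

def recall(x, steps=10):
    # Recurrent recall with k-winners-take-all dynamics: at each step,
    # keep only the a*N most excited units active.
    s = x.copy()
    k = int(a * N)
    for _ in range(steps):
        h = W @ s
        s = np.zeros(N)
        s[np.argsort(h)[-k:]] = 1.0
    return s

cue = distort(prototypes[0])
out = recall(cue)
# Fraction of the generating prototype's active units recovered.
overlap = (out @ prototypes[0]) / prototypes[0].sum()
```

A high `overlap` indicates the network recalled the generating prototype rather than any particular distorted training instance, which is the capability the paper measures across the seven rules.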
