ArXiv TLDR

Learning to Forget: Continual Learning with Adaptive Weight Decay

arXiv:2604.27063

Aditya A. Ramesh, Alex Lewandowski, Jürgen Schmidhuber

cs.LG, cs.NE

TLDR

FADE adapts per-parameter weight decay rates online for continual learning, enabling controlled forgetting that retains stable knowledge while freeing capacity to learn.

Key contributions

  • Introduces Forgetting through Adaptive Decay (FADE) for controlled, per-parameter forgetting.
  • Adapts per-parameter weight decay rates online via approximate meta-gradient descent (a hedged sketch follows this list).
  • Derives FADE for the online linear setting and applies it to the final layer of neural networks.
  • Empirically shows that FADE discovers distinct decay rates for different parameters, complements step-size adaptation, and consistently outperforms fixed weight decay on online tracking and streaming classification problems.
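
The paper's pseudocode isn't reproduced here, but to make the meta-gradient mechanism in the second bullet concrete, here is a minimal sketch of per-parameter weight decay adapted by an IDBD-style approximate meta-gradient in online linear regression. The function name `fade_sketch`, the sigmoid parameterization of the decay rates, and all hyperparameter values are our illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

def fade_sketch(stream, n_features, alpha=0.01, meta_lr=0.001):
    """Sketch of per-parameter adaptive weight decay for online linear
    regression, adapted by an IDBD-style approximate meta-gradient.
    Illustration under our own assumptions, not the paper's exact FADE."""
    w = np.zeros(n_features)           # linear weights
    beta = np.full(n_features, -4.0)   # decay logits; sigmoid(-4) ~ 0.018
    h = np.zeros(n_features)           # trace of d w_i / d beta_i
    lam = 1.0 / (1.0 + np.exp(-beta))  # per-parameter decay rates in (0, 1)
    for x, y in stream:
        delta = float(w @ x - y)       # prediction error
        # Meta-gradient step on the decay logits, using the diagonal
        # approximation dL/dbeta_i ~= delta * x_i * h_i.
        beta -= meta_lr * delta * x * h
        lam = 1.0 / (1.0 + np.exp(-beta))
        # Weight update: per-parameter decay toward zero, then gradient step.
        w_next = (1.0 - lam) * w - alpha * delta * x
        # Trace update under the same diagonal approximation:
        # h_i <- (1 - lam_i - alpha * x_i^2) * h_i - lam_i * (1 - lam_i) * w_i
        h = (1.0 - lam - alpha * x**2) * h - lam * (1.0 - lam) * w
        w = w_next
    return w, lam
```

A toy usage, in the spirit of the paper's online tracking problems: one target weight is stable while another drifts, so the drifting parameter may learn a larger decay rate.

```python
# Toy stream: one stable target weight and one that drifts every 500 steps.
rng = np.random.default_rng(0)

def make_stream(T=5000):
    w_true = np.array([1.0, 0.0])
    for t in range(T):
        if t % 500 == 0:
            w_true[1] = rng.normal()   # second target weight keeps changing
        x = rng.normal(size=2)
        yield x, w_true @ x + 0.01 * rng.normal()

w, lam = fade_sketch(make_stream(), n_features=2)
print("learned decay rates:", lam)
```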

Why it matters

Continual learning agents with finite capacity must trade off acquiring new knowledge against retaining the old. FADE addresses this by managing forgetting at the per-parameter level, letting stable knowledge persist while stale, fast-changing parameters decay. This frees capacity to learn and consistently improves over fixed weight decay in dynamic learning environments.

Original Abstract

Continual learning agents with finite capacity must balance acquiring new knowledge with retaining the old. This requires controlled forgetting of knowledge that is no longer needed, freeing up capacity to learn. Weight decay, viewed as a mechanism for forgetting, can serve this role by gradually discarding information stored in the weights. However, a fixed scalar weight decay drives this forgetting uniformly over time and uniformly across all parameters, even when some encode stable knowledge while others track rapidly changing targets. We introduce Forgetting through Adaptive Decay (FADE), which adapts per-parameter weight decay rates online via approximate meta-gradient descent. We derive FADE for the online linear setting and apply it to the final layer of neural networks. Our empirical analysis shows that FADE automatically discovers distinct decay rates for different parameters, complements step-size adaptation, and consistently improves over fixed weight decay across online tracking and streaming classification problems.
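
For concreteness, one plausible shape for the online linear update the abstract describes is sketched below; the notation and the sigmoid parameterization of the decay rate are our assumptions, not taken from the paper.

```latex
% A hypothetical per-parameter decay update in the online linear setting
% (sigmoid parameterization and notation are assumptions, not the paper's):
\delta_t = w_t^{\top} x_t - y_t, \qquad
\lambda_{t,i} = \sigma(\beta_{t,i}), \qquad
w_{t+1,i} = (1 - \lambda_{t,i})\, w_{t,i} - \alpha\, \delta_t\, x_{t,i}
% Each logit \beta_{t,i} is adapted online by approximate meta-gradient
% descent on the prediction loss, so stable parameters can settle at
% near-zero decay while rapidly changing ones decay faster.
```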

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.