ArXiv TLDR

There Will Be a Scientific Theory of Deep Learning

2604.21691

Jamie Simon, Daniel Kunin, Alexander Atanasov, Enric Boix-Adserà, Blake Bordelon + 9 more

stat.MLcs.LG

TLDR

This paper argues that a scientific theory of deep learning, which the authors term "learning mechanics," is emerging: one that characterizes training dynamics and the properties of trained networks.

Key contributions

  • Identifies five key research areas contributing to a deep learning theory.
  • Proposes "learning mechanics" as a framework for understanding training dynamics.
  • Emphasizes falsifiable quantitative predictions and coarse aggregate statistics.
  • Discusses relationship between learning mechanics and mechanistic interpretability.

Why it matters

This paper argues that "learning mechanics" is an emerging scientific theory of deep learning, synthesizing major strands of research on training dynamics and network properties. It offers a framework for understanding deep learning, addresses common skepticism about whether fundamental theory is possible, and points to open directions for future research.

Original Abstract

In this paper, we make the case that a scientific theory of deep learning is emerging. By this we mean a theory which characterizes important properties and statistics of the training process, hidden representations, final weights, and performance of neural networks. We pull together major strands of ongoing research in deep learning theory and identify five growing bodies of work that point toward such a theory: (a) solvable idealized settings that provide intuition for learning dynamics in realistic systems; (b) tractable limits that reveal insights into fundamental learning phenomena; (c) simple mathematical laws that capture important macroscopic observables; (d) theories of hyperparameters that disentangle them from the rest of the training process, leaving simpler systems behind; and (e) universal behaviors shared across systems and settings which clarify which phenomena call for explanation. Taken together, these bodies of work share certain broad traits: they are concerned with the dynamics of the training process; they primarily seek to describe coarse aggregate statistics; and they emphasize falsifiable quantitative predictions. We argue that the emerging theory is best thought of as a mechanics of the learning process, and suggest the name learning mechanics. We discuss the relationship between this mechanics perspective and other approaches for building a theory of deep learning, including the statistical and information-theoretic perspectives. In particular, we anticipate a symbiotic relationship between learning mechanics and mechanistic interpretability. We also review and address common arguments that fundamental theory will not be possible or is not important. We conclude with a portrait of important open directions in learning mechanics and advice for beginners. We host further introductory materials, perspectives, and open questions at learningmechanics.pub.
