ArXiv TLDR

Loop Corrections to the Training and Generalization Errors of Random Feature Models

2604.12827

Taeyoung Kim

cs.LG cs.AI stat.ML

TLDR

A statistical physics approach reveals loop corrections to training and generalization errors in random feature models, improving on mean-kernel approximations.

Key contributions

  • Analyzes random feature models in which randomly initialized neural networks are frozen and used as features, with only the readout weights trained, using a statistical physics approach (a minimal numerical sketch of this setup follows the list).
  • Goes beyond mean-kernel approximations to study training, test, and generalization errors.
  • Derives "loop corrections" from an effective field theory, accounting for higher-order fluctuations.
  • Provides scaling laws for these corrections and validates them experimentally.
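
To make the setup concrete, here is a minimal NumPy sketch of the model class described above: a one-hidden-layer network whose first-layer weights are drawn at random and frozen, a readout trained by ridge regression, and the test error averaged over many independent feature draws, which is the ensemble-averaged quantity the paper studies. The sizes, the ReLU nonlinearity, the linear teacher, and the ridge strength are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative, not from the paper):
# P training points, D input dims, N random features, ridge strength lam.
P, P_test, D, N, lam = 64, 256, 8, 128, 1e-3

X = rng.standard_normal((P, D))
X_test = rng.standard_normal((P_test, D))
w_star = rng.standard_normal(D) / np.sqrt(D)
y = X @ w_star            # simple linear teacher as the target (an assumption)
y_test = X_test @ w_star

def features(X, W):
    """Frozen one-hidden-layer features phi(x) = relu(W x) / sqrt(N)."""
    return np.maximum(X @ W.T, 0.0) / np.sqrt(W.shape[0])

def fit_readout(Phi, y, lam):
    """Ridge-regress only the readout weights; the features stay frozen."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

# Ensemble average over random feature draws -- the averaged error the paper studies.
test_errs = []
for _ in range(200):
    W = rng.standard_normal((N, D))          # frozen first-layer weights
    a = fit_readout(features(X, W), y, lam)  # trained readout weights
    pred = features(X_test, W) @ a
    test_errs.append(np.mean((pred - y_test) ** 2))

print("ensemble-averaged test error:", np.mean(test_errs))
```

Because the trained predictor depends nonlinearly on the random kernel induced by the frozen features, the average of this error over draws is not determined by the mean kernel alone; the paper's loop corrections quantify the finite-width part of that gap.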

Why it matters

This work advances our understanding of random feature models by moving beyond the simplified mean-kernel approximation. The loop corrections account for finite-width kernel fluctuations that the mean kernel ignores, yielding more accurate predictions of training, test, and generalization errors, along with scaling laws for the size of these finite-width effects.

Original Abstract

We investigate random feature models in which neural networks sampled from a prescribed initialization ensemble are frozen and used as random features, with only the readout weights optimized. Adopting a statistical-physics viewpoint, we study the training, test, and generalization errors beyond the mean-kernel approximation. Since the predictor is a nonlinear functional of the induced random kernel, the ensemble-averaged errors depend not only on the mean kernel but also on higher-order fluctuation statistics. Within an effective field-theoretic framework, these finite-width contributions naturally appear as loop corrections. We derive the loop corrections to the training, test, and generalization errors, obtain their scaling laws, and support the theory with experimental verification.
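
The central point of the abstract, that the ensemble-averaged errors depend on more than the mean kernel, can be illustrated with the standard kernel-ridge form of the predictor. The notation below (ridge parameter \(\lambda\), width \(N\), resolvent \(G\)) is mine, and the naive expansion shown is only a heuristic stand-in for the paper's field-theoretic treatment.

```latex
% Kernel ridge predictor built from the random feature kernel:
\hat f(x) \;=\; k(x)^\top (K + \lambda I)^{-1} y,
\qquad K_{ij} \;=\; \tfrac{1}{N}\,\phi(x_i)^\top \phi(x_j).

% Split the kernel into its ensemble mean and a fluctuation,
% K = \bar K + \delta K with \mathbb{E}[\delta K] = 0, and set G = (\bar K + \lambda I)^{-1}:
(K + \lambda I)^{-1} \;=\; G \;-\; G\,\delta K\,G \;+\; G\,\delta K\,G\,\delta K\,G \;-\; \cdots
```

Averaging over feature draws removes the first-order term, so the leading deviation from the mean-kernel prediction is set by second-order fluctuation statistics such as \(\mathbb{E}[\delta K\,G\,\delta K]\) (and by fluctuations of \(k(x)\) itself), which shrink as the width \(N\) grows. These are the finite-width contributions that appear as loop corrections in the paper's effective field theory.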
