Collective Kernel EFT for Pre-activation ResNets
Hidetoshi Kawase, Toshihiro Ota
TLDR
This paper develops a collective kernel effective field theory (EFT) for pre-activation ResNets and diagnoses where the theory remains accurate and where it breaks down.
Key contributions
- Developed a collective kernel EFT for pre-activation ResNets using a G-only closure hierarchy.
- Derived an exact stochastic recursion for the empirical kernel G, then applied Gaussian approximations to obtain continuous-depth ODEs for the mean kernel K0, the kernel covariance V4, and the 1/n mean correction K1,EFT.
- Found that K0 stays accurate at all depths, while the V4 equation accumulates an O(1) residual at finite time and K1,EFT fails because the source closure breaks down, exposing the limits of the G-only closure.
- Suggested extending the state space beyond G-only to include the sigma-kernel for better accuracy.
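To make these objects concrete, here is a minimal Monte Carlo sketch of a stochastic kernel trajectory: it simulates a toy width-n pre-activation ResNet and tracks the empirical kernel G_l = ||h_l||^2 / n for a single input across layers and random initializations. The update rule h_{l+1} = h_l + W_l φ(h_l)/√L, the ReLU nonlinearity, and all parameter values are illustrative assumptions, not the paper's exact setup; the sample mean and variance of G across initializations play the roles of K0 and V4.

```python
import numpy as np

def empirical_kernel_trajectory(n=128, depth=32, n_trials=100, seed=0):
    """Monte Carlo sketch of the layerwise empirical kernel G_l = ||h_l||^2 / n
    in a toy pre-activation ResNet. The update rule, ReLU activation, and
    1/sqrt(L) residual scaling are illustrative assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    G = np.zeros((n_trials, depth + 1))
    for t in range(n_trials):
        h = rng.standard_normal(n)                       # pre-activations at layer 0
        G[t, 0] = h @ h / n
        for l in range(depth):
            phi = np.maximum(h, 0.0)                     # ReLU on pre-activations
            W = rng.standard_normal((n, n)) / np.sqrt(n) # variance-1/n Gaussian weights
            h = h + (W @ phi) / np.sqrt(depth)           # residual update, 1/sqrt(L) scaling
            G[t, l + 1] = h @ h / n
    return G

G = empirical_kernel_trajectory()
K0_hat = G.mean(axis=0)   # sample mean of G across inits: estimate of K0 vs. depth
V4_hat = G.var(axis=0)    # sample variance of G across inits: analogue of V4
print(round(K0_hat[0], 3), round(K0_hat[-1], 3))
```

At finite width, the trajectories fluctuate around the mean kernel, which is exactly the stochasticity the G-only EFT tries to capture with K0 and V4.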
Why it matters
This paper advances our understanding of kernel dynamics in finite-width ResNets. It reveals the limitations of G-only effective field theories, suggesting that extended state spaces, such as one including the sigma-kernel, are needed to describe kernel fluctuations accurately at finite width.
Original Abstract
In finite-width deep neural networks, the empirical kernel $G$ evolves stochastically across layers. We develop a collective kernel effective field theory (EFT) for pre-activation ResNets based on a $G$-only closure hierarchy and diagnose its finite validity window. Exploiting the exact conditional Gaussianity of residual increments, we derive an exact stochastic recursion for $G$. Applying Gaussian approximations systematically yields a continuous-depth ODE system for the mean kernel $K_0$, the kernel covariance $V_4$, and the $1/n$ mean correction $K_{1,\mathrm{EFT}}$, which emerges diagrammatically as a one-loop tadpole correction. Numerically, $K_0$ remains accurate at all depths. However, the $V_4$ equation residual accumulates to an $O(1)$ error at finite time, primarily driven by approximation errors in the $G$-only transport term. Furthermore, $K_{1,\mathrm{EFT}}$ fails due to the breakdown of the source closure, which exhibits a systematic mismatch even at initialization. These findings highlight the limitations of $G$-only state-space reduction and suggest extending the state space to incorporate the sigma-kernel.
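For intuition about the continuous-depth ODE description, consider a toy setting: for a ReLU residual branch with 1/√L depth scaling, the Gaussian half-moment identity E[ReLU(z)²] = K/2 for z ~ N(0, K) closes the mean-kernel dynamics into dK0/dt = K0/2. This toy equation is an assumption for illustration, not the paper's K0 equation, but it shows the kind of object the EFT integrates:

```python
import math

# Toy continuous-depth mean-kernel ODE (illustrative assumption, not the
# paper's exact equation): for a ReLU residual branch with 1/sqrt(L) depth
# scaling, E[ReLU(z)^2] = K/2 for z ~ N(0, K) gives dK0/dt = K0 / 2.
def integrate_K0(K0_init=1.0, T=1.0, steps=1000):
    dt = T / steps
    K = K0_init
    for _ in range(steps):
        K += dt * K / 2.0        # forward Euler step on dK0/dt = K0 / 2
    return K

# Euler estimate vs. the closed form K0(T) = K0(0) * exp(T / 2)
print(integrate_K0(), math.exp(0.5))
```

In this linear toy case the ODE is exactly solvable; the paper's point is that the analogous equations for V4 and K1,EFT pick up uncontrolled closure errors even when the K0 equation stays accurate.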