Why Geometric Continuity Emerges in Deep Neural Networks: Residual Connections and Rotational Symmetry Breaking
Kyungwon Jeong, Won-Gi Paeng, Honggyo Suh
TLDR
This paper explains why deep neural networks exhibit geometric continuity, attributing it to residual connections and symmetry-breaking nonlinearities.
Key contributions
- Residual connections align weight updates by creating cross-layer gradient coherence.
- Symmetry-breaking nonlinearities prevent rotation drift, constraining layers to a shared coordinate frame.
- Demonstrates that symmetry breaking, not nonlinearity per se, is the active ingredient for geometric continuity.
- In transformers, continuity is projection-specific: Q, K, Gate, and Up develop input-space (v1) continuity, while O and Down develop output-space (u1) continuity; a measurement sketch follows this list.
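As a concrete reference, here is a minimal sketch (my construction, not code from the paper) of how this kind of continuity can be measured: take the SVD of two adjacent weight matrices and compare their leading singular vectors, using v1 for input-space continuity and u1 for output-space continuity.

```python
# Hypothetical helper, assuming "geometric continuity" is quantified as
# the absolute cosine similarity between the leading singular vectors of
# adjacent layers' weight matrices (per the abstract's v1/u1 framing).
import numpy as np

def leading_singular_vectors(W):
    """Return (u1, v1), the top left/right singular vectors of W."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, 0], Vt[0, :]

def continuity(W_a, W_b, space="input"):
    """|cos| between leading singular vectors of two weight matrices.

    space="input" compares v1 (directions the layers read);
    space="output" compares u1 (directions they write).
    """
    u_a, v_a = leading_singular_vectors(W_a)
    u_b, v_b = leading_singular_vectors(W_b)
    a, b = (v_a, v_b) if space == "input" else (u_a, u_b)
    return abs(float(a @ b))  # abs(): singular vectors are sign-ambiguous

# Two untrained random layers score near 0 at large width; the paper
# reports that trained adjacent layers score much higher.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
print(f"random-init continuity: {continuity(W1, W2):.3f}")
```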
Why it matters
This paper explains the origin of geometric continuity in deep networks, a widely observed but previously unexplained property. Identifying residual connections and symmetry-breaking nonlinearities as the responsible mechanisms can inform future architectural design, and the projection-specific analysis clarifies how transformer weights organize around the residual stream.
Original Abstract
Weight matrices in deep networks exhibit geometric continuity -- principal singular vectors of adjacent layers point in similar directions. While this property has been widely observed, its origin remains unexplained. Through experiments on toy MLPs and small transformers, we identify two mechanisms: residual connections create cross-layer gradient coherence that aligns weight updates across layers, and symmetry-breaking nonlinearities constrain all layers to a shared coordinate frame, preventing the rotation drift that would otherwise destabilize weight structure. Crucially, a nonlinear but rotation-preserving activation fails to retain continuity, isolating symmetry breaking -- not nonlinearity itself -- as the active ingredient. Activation and normalization play distinct roles: activation concentrates continuity in the leading singular direction, while normalization distributes it across multiple directions. In transformers, continuity is projection-specific: Q, K, Gate, and Up (which read from the residual stream) develop input-space ($\mathbf{v}_1$) continuity; O and Down (which write to it) develop output-space ($\mathbf{u}_1$) continuity; V alone, lacking an adjacent nonlinearity, develops only low continuity.
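The abstract's isolation of symmetry breaking can be illustrated with a small self-contained check (an assumed construction, not the paper's experiment): a radial nonlinearity such as f(x) = tanh(||x||) · x/||x|| is nonlinear yet commutes with every rotation, whereas the elementwise ReLU does not.

```python
# Illustrative check (assumed construction, not the paper's activation):
# a radial nonlinearity is rotation-preserving, f(Qx) = Q f(x) for any
# orthogonal Q, while the elementwise ReLU breaks that symmetry.
import numpy as np

def radial(x):
    """Nonlinear but rotation-preserving: rescales the norm, keeps direction."""
    n = np.linalg.norm(x)
    return np.tanh(n) / n * x if n > 0 else x

def relu(x):
    """Elementwise ReLU: tied to the coordinate axes, so it breaks rotations."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal transform

print(np.allclose(radial(Q @ x), Q @ radial(x)))  # True: symmetry preserved
print(np.allclose(relu(Q @ x), Q @ relu(x)))      # False: symmetry broken
```

Per the abstract, a network built on the first kind of activation fails to retain continuity, which pins the effect on symmetry breaking rather than on nonlinearity itself.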