Symmetry-Protected Lyapunov Neutral Modes in Equivariant Recurrent Networks
TL;DR
This paper proves that Lie-group-equivariant recurrent networks carry symmetry-protected neutral modes, guaranteeing stable long-term memory for continuous variables such as position or phase.
Key contributions
- Proves a theorem guaranteeing at least dim(G/H) zero Lyapunov exponents tangent to the group orbit in Lie-group-equivariant systems (see the sketch after this list).
- Shows these "symmetry-protected modes" have zero group-tangent growth by exact equivariance and orbit geometry, yielding the neutral directions needed for stable long-term memory.
- Demonstrates that explicitly breaking the symmetry opens a "pseudo-gap" in the formerly protected direction, which predicts a finite memory lifetime.
- Validates the theory with experiments across S^1, T^q, SO(n), U(m), and product groups, and with an exactly equivariant RNN cell trained on S^1 path integration against matched GRU, LSTM, and orthogonal-RNN baselines.
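To make the theorem's simplest case concrete, here is a minimal sketch (mine, not the paper's code) for G = S^1 acting by rotation on the plane. The toy field f(x) = (1 - |x|^2) x is rotation-equivariant and has the unit circle as a ring attractor, so the orbit-tangent direction should carry a zero Lyapunov exponent while the transverse direction contracts. The field, the Euler integrator, and the helper `direction_exponent` are all illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def f(x):
    # Toy rotation-equivariant field on R^2: f(Rx) = R f(x) for every rotation R.
    # The unit circle is an invariant ring of equilibria (a continuous attractor).
    return (1.0 - x @ x) * x

def jac(x):
    # Jacobian: Df(x) = (1 - |x|^2) I - 2 x x^T.
    return (1.0 - x @ x) * np.eye(2) - 2.0 * np.outer(x, x)

def direction_exponent(x0, v0, T=50.0, dt=1e-3):
    # Finite-time Lyapunov exponent of direction v0 along the trajectory from x0,
    # via Euler integration of the variational equation v' = Df(x(t)) v.
    x, v = x0.copy(), v0 / np.linalg.norm(v0)
    log_growth = 0.0
    for _ in range(int(T / dt)):
        x = x + dt * f(x)
        v = v + dt * (jac(x) @ v)
        n = np.linalg.norm(v)
        log_growth += np.log(n)
        v = v / n
    return log_growth / T

theta = 0.7
x0 = np.array([np.cos(theta), np.sin(theta)])        # a point on the group orbit
tangent = np.array([-np.sin(theta), np.cos(theta)])  # S^1 orbit (group-tangent) direction
print("group-tangent exponent (expect ~ 0) :", direction_exponent(x0, tangent))
print("transverse exponent    (expect ~ -2):", direction_exponent(x0, x0))
```

On the ring the Jacobian reduces to -2 x x^T, so the exact exponents are 0 along the orbit and -2 transversally; the numerical estimates should match this up to integrator error. This is the finite-dimensional pattern the paper checks with direct group-tangent exponents.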
Why it matters
Recurrent networks that store continuous variables such as position or phase need state-space directions that stay neutral over long horizons. This paper supplies a theoretical foundation for that stability, showing that exact symmetry guarantees neutral directions rather than leaving them to delicate tuning. That guarantee matters for designing RNNs whose long-term memory is robust and predictable by construction.
Original Abstract
Recurrent networks that store position, phase, or other continuous variables need state-space directions that remain neutral over long horizons. We give a symmetry-based account of when such neutral directions are guaranteed rather than merely tuned. For a finite-dimensional autonomous \(C^1\) vector field equivariant under a Lie group \(G\), we prove that any compact invariant set carrying a uniformly nondegenerate group-orbit bundle with stabilizer type \(H\) has, at points where the Lyapunov spectrum is defined, at least \(\dim(G/H)\) zero Lyapunov exponents tangent to the group orbit. These symmetry-protected modes have zero group-tangent growth because of exact equivariance and orbit geometry. When this protection is explicitly broken, the formerly protected direction can acquire a pseudo-gap; in our controlled breaking experiments this pseudo-gap predicts finite memory lifetime. We verify the finite-dimensional consequences with normalized equivariance error, direct group-tangent exponents, principal-angle alignment, autonomous-flow-zero controls, and orbit-dimension scaling across \(S^1\), \(T^q\), \(SO(n)\), \(U(m)\), product-group, and coupled equivariant RNN-style systems. We also train an exactly equivariant recurrent cell on velocity-input \(S^1\) path integration across six seeds and compare it with matched GRU, LSTM, and orthogonal-RNN baselines. The learned equivariant cell preserves step equivariance to \(3.2\times10^{-8}\), has a near-zero group-tangent exponent under the zero-input autonomous restriction, and improves horizon, speed, and restricted-phase generalization in this matched protocol. The learned task results are consequence evidence; the theorem-level evidence remains exact equivariance, group-tangent exponents, orbit-dimension scaling, and tangent-subspace alignment.
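The abstract's pseudo-gap claim also admits a one-variable caricature. A minimal sketch, assuming the protected angle is frozen (theta' = 0) under exact symmetry and obeys theta' = -eps * sin(theta) once an eps-sized pinning term breaks it; the perturbation form, the initial angle, and the decay threshold are my illustrative choices, not the paper's controlled-breaking protocol.

```python
import numpy as np

def memory_lifetime(eps, theta0=0.5, dt=1e-3, tol=1e-3):
    # Time for a stored angle to decay to tol * theta0 under the
    # symmetry-broken dynamics theta' = -eps * sin(theta) (pinned at theta = 0).
    theta, t = theta0, 0.0
    while abs(theta) > tol * theta0:
        theta += dt * (-eps * np.sin(theta))
        t += dt
    return t

for eps in (0.1, 0.05, 0.025):
    # Linearizing at the pinned point gives an exponent of about -eps: the "pseudo-gap".
    # Halving eps should therefore roughly double the measured lifetime.
    print(f"eps = {eps:5.3f}   pseudo-gap ~ {-eps:+.3f}   lifetime ~ {memory_lifetime(eps):8.1f}")
```

At eps = 0 the angle never decays (the loop would not terminate), which is the symmetry-protected regime the theorem describes; for eps > 0 the pseudo-gap sets a finite memory lifetime scaling like 1/eps.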