A Nonlinear Separation Principle: Applications to Neural Networks, Control and Learning
Anand Gokhale, Anton V. Proskurnikov, Yu Kawano, Francesco Bullo
TLDR
This paper introduces a nonlinear separation principle for recurrent neural networks, enabling stable control design and efficient implicit deep learning.
Key contributions
- Introduces a nonlinear separation principle: the interconnection of a contracting state-feedback controller and a contracting observer is globally exponentially stable.
- Derives sharp LMI conditions for contractivity of firing-rate and Hopfield RNNs, showing that continuous-time models with monotone non-decreasing activations admit the largest weight space (see the sketch after this list).
- Applies the principle and LMIs to solve output reference tracking for RNN-modeled plants, including a low-gain integral controller that eliminates steady-state error.
- Develops an exact, unconstrained algebraic parameterization of the contraction LMIs, yielding expressive implicit neural networks with competitive accuracy and parameter efficiency.
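To give a concrete sense of what a contraction LMI looks like in practice, the sketch below checks a classical circle-criterion-style sufficient condition for a continuous-time Hopfield network ẋ = −x + Wσ(x) + u with slope-restricted activations (σ′ ∈ [0, 1]), searching for a diagonal metric P ≻ 0 and S-procedure multipliers with cvxpy. This is not the paper's sharp certificate: the specific LMI, the function name `hopfield_contraction_lmi`, and the tolerance constants are illustrative assumptions.

```python
# Minimal feasibility sketch (assumed condition, not the paper's exact LMI):
# for x' = -x + W*sigma(x) + u with sigma'(s) in [0, 1], the incremental
# sector bound plus an S-procedure gives the sufficient LMI
#   [ -2(1-c)P    P W + L ]
#   [ W' P + L     -2 L   ]  <= 0,   P, L diagonal, P > 0, L >= 0,
# which certifies contraction rate c in the weighted norm |e|_P.
import cvxpy as cp
import numpy as np

def hopfield_contraction_lmi(W: np.ndarray, rate: float = 0.1):
    """Search for a diagonal metric P certifying contraction rate `rate`."""
    n = W.shape[0]
    p = cp.Variable(n)                   # diagonal entries of the metric P
    lam = cp.Variable(n, nonneg=True)    # incremental sector multipliers
    P, Lam = cp.diag(p), cp.diag(lam)
    lmi = cp.bmat([[-2 * (1 - rate) * P, P @ W + Lam],
                   [W.T @ P + Lam,       -2 * Lam]])
    lmi = (lmi + lmi.T) / 2              # enforce symmetry for the PSD cone
    prob = cp.Problem(cp.Minimize(0),
                      [lmi << -1e-8 * np.eye(2 * n), p >= 1e-6])
    prob.solve()
    return p.value if prob.status == cp.OPTIMAL else None

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((4, 4)) / np.sqrt(4)   # small-gain weights
print(hopfield_contraction_lmi(W))                   # metric entries or None
```

Feasibility of this program returns one admissible metric; the paper's contribution is characterizing when such certificates exist and how the admissible weight sets compare across architectures.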
Why it matters
This paper provides a foundational nonlinear separation principle, extending stability guarantees to complex RNN architectures. It offers practical LMI-based synthesis methods for robust control and enables the design of highly efficient implicit deep learning models.
Original Abstract
This paper investigates continuous-time and discrete-time firing-rate and Hopfield recurrent neural networks (RNNs), with applications in nonlinear control design and implicit deep learning. First, we introduce a nonlinear separation principle that guarantees global exponential stability for the interconnection of a contracting state-feedback controller and a contracting observer, alongside parametric extensions for robustness and equilibrium tracking. Second, we derive sharp linear matrix inequality (LMI) conditions that guarantee the contractivity of both firing-rate and Hopfield neural network architectures. We establish structural relationships among these certificates, demonstrating that continuous-time models with monotone non-decreasing activations maximize the admissible weight space, and extend these stability guarantees to interconnected systems and Graph RNNs. Third, we combine our separation principle and LMI framework to solve the output reference tracking problem for RNN-modeled plants. We provide LMI synthesis methods for feedback controllers and observers, and rigorously design a low-gain integral controller to eliminate steady-state error. Finally, we derive an exact, unconstrained algebraic parameterization of our contraction LMIs to design highly expressive implicit neural networks, achieving competitive accuracy and parameter efficiency on standard image classification benchmarks.
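To illustrate the idea behind the final contribution, here is a hedged sketch of a direct (unconstrained) parameterization: train a free matrix and map it algebraically into the feasible set of a contraction certificate, so the implicit layer z = tanh(Wz + Ux + b) always has a unique, efficiently computable fixed point. The spectral-norm rescaling below is a simple stand-in for the paper's exact parameterization, and all names (`contractive_weight`, `implicit_layer`) are hypothetical.

```python
# Hedged sketch of a direct parameterization for implicit networks: any free
# matrix A is mapped to W with ||W||_2 <= gamma < 1, so z -> tanh(Wz + Ux + b)
# is a contraction (tanh is 1-Lipschitz) and Banach's theorem gives a unique
# fixed point. The paper's exact algebraic parameterization of its LMIs is
# more general; this is only the simplest instance of the same principle.
import numpy as np

def contractive_weight(A: np.ndarray, gamma: float = 0.95) -> np.ndarray:
    """Map a free matrix A to W with spectral norm at most gamma."""
    s = np.linalg.norm(A, 2)            # largest singular value of A
    return A * (gamma / max(s, gamma))  # rescale only when ||A|| > gamma

def implicit_layer(A, U, b, x, iters=50):
    """Solve z = tanh(W z + U x + b) by fixed-point iteration."""
    W = contractive_weight(A)
    z = np.zeros(W.shape[0])
    for _ in range(iters):              # geometric convergence at rate gamma
        z = np.tanh(W @ z + U @ x + b)
    return z

rng = np.random.default_rng(1)
A, U, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 3)), np.zeros(8)
print(implicit_layer(A, U, b, rng.standard_normal(3)))
```

Because the map from A to W is unconstrained and differentiable almost everywhere, standard gradient-based training applies directly, with the contraction (and hence well-posedness of the layer) guaranteed by construction.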