ArXiv TLDR

Learning Over-Relaxation Policies for ADMM with Convergence Guarantees

arXiv:2604.26932

Junan Lin, Paul J. Goulart, Luca Furieri

math.OC cs.LG

TLDR

This paper proposes learning over-relaxation policies for ADMM with convergence guarantees, yielding faster solves (in both iterations and wall-clock time) on structured convex optimization problems.

Key contributions

  • Proposes learning online updates for ADMM's relaxation parameter to boost performance.
  • Establishes convergence guarantees for ADMM with time-varying penalty and relaxation parameters.
  • Shows learned policies improve iteration count and wall-clock time on benchmark QPs over baseline OSQP.
  • Emphasizes computational efficiency as relaxation updates avoid costly matrix refactorizations.

Why it matters

ADMM is widely used, and its efficiency is crucial for applications like Model Predictive Control. Learning relaxation parameters offers a practical way to significantly speed up ADMM without expensive recomputations. This work provides both theoretical guarantees and empirical evidence for its effectiveness.

Original Abstract

The Alternating Direction Method of Multipliers (ADMM) is a widely used method for structured convex optimization, and its practical performance depends strongly on the choice of penalty and relaxation parameters. Motivated by settings such as Model Predictive Control (MPC), where one repeatedly solves related optimization problems with fixed structure and changing parameter values, we propose learning online updates of the relaxation parameter to improve performance on problem classes of interest. This choice is computationally attractive in OSQP-like architectures, since adapting relaxation does not trigger the matrix refactorizations associated with penalty updates. We establish convergence guarantees for ADMM with time-varying penalty and relaxation parameters under mild assumptions, and show on benchmark quadratic programs that the resulting learned policies improve both iteration count and wall-clock time over baseline OSQP.
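The over-relaxation step the abstract refers to can be made concrete with a minimal sketch of over-relaxed ADMM on a box-constrained QP. This is an illustration only, not the paper's learned policy: the problem data, the fixed `alpha=1.6`, and the penalty `rho=1.0` are arbitrary choices for the example. Note how `alpha` enters only in a vector blend, so changing it between solves leaves the cached Cholesky factorization untouched, whereas changing `rho` would not.

```python
import numpy as np

def admm_box_qp(P, q, lo, hi, rho=1.0, alpha=1.6, iters=200):
    """Over-relaxed ADMM for: min 0.5 x'Px + q'x  s.t.  lo <= x <= hi.

    Splitting: f(x) = 0.5 x'Px + q'x, g(z) = indicator of the box,
    with consensus constraint x = z. alpha in (0, 2) is the
    relaxation parameter.
    """
    n = len(q)
    # Factor (P + rho I) once; only a rho update would force a refactorization.
    L = np.linalg.cholesky(P + rho * np.eye(n))
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(iters):
        # x-update: solve (P + rho I) x = rho (z - u) - q via the cached factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, rho * (z - u) - q))
        # over-relaxation: blend the new x with the previous z (cheap, no refactor)
        x_hat = alpha * x + (1 - alpha) * z
        # z-update: project onto the box
        z = np.clip(x_hat + u, lo, hi)
        # dual update
        u = u + x_hat - z
    return z
```

The paper's contribution, rather than fixing `alpha` as above, is to learn an online update rule for it over a class of related problems (as in MPC) while retaining convergence guarantees.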
