Classical and Quantum Speedups for Non-Convex Optimization via Energy Conserving Descent
Yihang Sun, Huaijin Wang, Patrick Hayden, Jose Blanchet
TLDR
New stochastic and quantum Energy Conserving Descent algorithms achieve provably exponential speedups over their gradient descent baselines for one-dimensional non-convex optimization.
Key contributions
- Presents the first analytical study of Energy Conserving Descent (ECD) in a 1D setting.
- Formalizes stochastic ECD (sECD) with energy-preserving noise, plus a quantum analog (qECD) built on Hamiltonian simulation; a toy sketch of the sECD mechanics follows this list.
- Proves that sECD and qECD yield exponential speedups over their respective gradient descent baselines: stochastic gradient descent and its quantization.
- Demonstrates that qECD achieves a further speedup over sECD for objectives with tall energy barriers.
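To make the ECD mechanics concrete, here is a minimal, illustrative Python sketch of an sECD-style step on a 1D double-well. It is a sketch under assumptions, not the paper's algorithm: it uses a plain separable Hamiltonian H(x, p) = p²/2 + f(x) as a stand-in for the specific ECD Hamiltonian of De Luca & Silverstein (2022), and it models "energy-preserving noise" as an occasional momentum flip, which leaves H exactly unchanged. The objective, step size, and flip probability are all hypothetical choices.

```python
import numpy as np

def f(x):
    """A positive 1D double-well (illustrative, not from the paper):
    local minimum near x = -0.96, global minimum near x = +1.04."""
    return (x**2 - 1.0)**2 - 0.3 * x + 0.5

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0) - 0.3

def secd_step(x, p, dt, flip_prob, rng):
    """One leapfrog step of Hamiltonian dynamics for H(x, p) = p**2/2 + f(x),
    which conserves H up to discretization error, followed by a stand-in for
    energy-preserving noise: with small probability flip p -> -p, which leaves
    H exactly unchanged because the kinetic term is even in p."""
    p -= 0.5 * dt * grad_f(x)
    x += dt * p
    p -= 0.5 * dt * grad_f(x)
    if rng.random() < flip_prob:
        p = -p
    return x, p

# Start in the local well with enough kinetic energy to clear the barrier,
# so the energy-conserving trajectory can reach the global well directly.
rng = np.random.default_rng(0)
x, p = -0.96, 1.3
for _ in range(1000):
    x, p = secd_step(x, p, dt=1e-2, flip_prob=0.02, rng=rng)
print(f"x = {x:.3f}, energy = {0.5 * p**2 + f(x):.3f}")
```

The printed energy stays close to its initial value, which is the property that lets the dynamics revisit both wells instead of settling into the first basin it finds.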
Why it matters
Non-convex optimization is central to machine learning, yet gradient-based methods often get stuck in local minima. This work introduces sECD and qECD, ECD variants whose appropriately configured dynamics escape strict local minima and reach a global minimum. Their proven exponential speedups over gradient descent baselines, established here in the one-dimensional setting, could significantly advance non-convex optimization techniques.
Original Abstract
The Energy Conserving Descent (ECD) algorithm was recently proposed (De Luca & Silverstein, 2022) as a global non-convex optimization method. Unlike gradient descent, appropriately configured ECD dynamics escape strict local minima and converge to a global minimum, making it appealing for machine learning optimization. We present the first analytical study of ECD, focusing on the one-dimensional setting for this first installment. We formalize a stochastic ECD dynamics (sECD) with energy-preserving noise, as well as a quantum analog of the ECD Hamiltonian (qECD), providing the foundation for a quantum algorithm through Hamiltonian simulation. For positive double-well objectives, we compute the expected hitting time from a local to the global minimum. We prove that both sECD and qECD yield exponential speedup over respective gradient descent baselines--stochastic gradient descent and its quantization. For objectives with tall barriers, qECD achieves a further speedup over sECD.
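As a rough illustration of the hitting-time comparison described in the abstract, the following hedged sketch measures how long each dynamics takes to first enter a neighborhood of the global minimum of a positive double-well, starting near the local one. The well locations, noise scale, step size, and budget are illustrative assumptions, and the SGD-like baseline is a simple noisy-gradient stand-in, not the paper's quantized or analyzed dynamics; with noise this small, its barrier crossing is exponentially rare and the run typically times out, while the energy-conserving trajectory crosses directly.

```python
import numpy as np

def f(x):  # positive double-well; local min near -0.96, global min near +1.04
    return (x**2 - 1.0)**2 - 0.3 * x + 0.5

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0) - 0.3

rng = np.random.default_rng(0)
x_local, x_global, tol = -0.96, 1.04, 0.1
max_steps, dt = 200_000, 1e-2

# SGD-like baseline: gradient step plus small Gaussian noise. Escaping the
# barrier requires a rare fluctuation, so t_sgd usually stays at max_steps.
x, t_sgd = x_local, max_steps
for t in range(max_steps):
    x -= dt * grad_f(x) + 0.05 * np.sqrt(dt) * rng.normal()
    if abs(x - x_global) < tol:
        t_sgd = t
        break

# Energy-conserving dynamics: the initial kinetic energy is chosen above the
# barrier height, so the leapfrog trajectory crosses to the global well.
x, p, t_ecd = x_local, 1.3, max_steps
for t in range(max_steps):
    p -= 0.5 * dt * grad_f(x)
    x += dt * p
    p -= 0.5 * dt * grad_f(x)
    if abs(x - x_global) < tol:
        t_ecd = t
        break

print(f"first hitting time  SGD-like: {t_sgd}  energy-conserving: {t_ecd}")
```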