Evolution of Optimization Methods: Algorithms, Scenarios, and Evaluations
Tong Zhang, Jiangning Zhang, Zhucun Xue, Juntao Jiang, Yicheng Xu, et al.
TLDR
This paper comprehensively reviews and empirically evaluates deep learning optimization methods, identifying key trends and future research directions.
Key contributions
- Analyzes the evolutionary trajectory of deep learning optimization algorithms.
- Provides a comprehensive empirical evaluation of mainstream optimizers across diverse models and scenarios.
- Distills key emerging trends, fundamental design trade-offs, and future research directions.
- Offers actionable guidance for designing next-generation efficient, robust, and trustworthy optimization methods.
Why it matters
The field lacks a cohesive framework for deep learning optimization. This paper unifies underlying principles and delineates application scenarios, offering actionable guidance for designing next-generation efficient, robust, and trustworthy methods. Such guidance is increasingly important as training scales to larger models, distributed settings, and privacy-constrained regimes.
Original Abstract
Balancing convergence speed, generalization capability, and computational efficiency remains a core challenge in deep learning optimization. First-order gradient descent methods, epitomized by stochastic gradient descent (SGD) and Adam, serve as the cornerstone of modern training pipelines. However, large-scale model training, stringent differential privacy requirements, and distributed learning paradigms expose critical limitations in these conventional approaches regarding privacy protection and memory efficiency. To mitigate these bottlenecks, researchers explore second-order optimization techniques to surpass first-order performance ceilings, while zeroth-order methods reemerge to alleviate memory constraints inherent to large-scale training. Despite this proliferation of methodologies, the field lacks a cohesive framework that unifies underlying principles and delineates application scenarios for these disparate approaches. In this work, we retrospectively analyze the evolutionary trajectory of deep learning optimization algorithms and present a comprehensive empirical evaluation of mainstream optimizers across diverse model architectures and training scenarios. We distill key emerging trends and fundamental design trade-offs, pinpointing promising directions for future research. By synthesizing theoretical insights with extensive empirical evidence, we provide actionable guidance for designing next-generation highly efficient, robust, and trustworthy optimization methods. The code is available at https://github.com/APRIL-AIGC/Awesome-Optimizer.
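To make the optimizer families discussed in the abstract concrete, here is a minimal, self-contained sketch (not taken from the paper or its repository; all names and hyperparameters are illustrative) contrasting a first-order SGD update, an Adam update, and a two-point zeroth-order (SPSA-style) update on a toy least-squares objective. It illustrates the core trade-off the abstract highlights: first-order methods use exact gradients, while zeroth-order methods trade gradient accuracy for memory by relying on loss queries alone.

```python
import numpy as np

# Toy objective standing in for a training loss: f(w) = ||Aw - b||^2 / n.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
b = rng.normal(size=100)

def loss(w):
    r = A @ w - b
    return float(r @ r) / len(b)

def grad(w):
    return 2.0 * A.T @ (A @ w - b) / len(b)

def sgd_step(w, lr=0.05):
    """First-order update: step against the gradient."""
    return w - lr * grad(w)

def adam_step(w, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """First-order update with per-coordinate adaptive step sizes."""
    g = grad(w)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def spsa_step(w, lr=0.05, mu=1e-3):
    """Zeroth-order update: estimate the gradient from two loss queries
    along a random direction, avoiding any backward pass."""
    u = rng.normal(size=w.shape)
    g_est = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu) * u
    return w - lr * g_est

w_sgd = w_adam = w_zo = np.zeros(10)
m = v = np.zeros(10)
for t in range(1, 201):
    w_sgd = sgd_step(w_sgd)
    w_adam, m, v = adam_step(w_adam, m, v, t)
    w_zo = spsa_step(w_zo)

print(f"SGD loss:  {loss(w_sgd):.4f}")
print(f"Adam loss: {loss(w_adam):.4f}")
print(f"ZO loss:   {loss(w_zo):.4f}")
```

On this toy problem the zeroth-order iterate typically converges more slowly and noisily than the first-order ones, which mirrors the memory-versus-convergence trade-off the survey evaluates at scale.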