ArXiv TLDR

AdaMeZO: Adam-style Zeroth-Order Optimizer for LLM Fine-tuning Without Maintaining the Moments

arXiv:2605.00650

Zhijie Cai, Haolong Chen, Guangxu Zhu

cs.LG cs.AI

TLDR

AdaMeZO is an Adam-style zeroth-order optimizer for LLM fine-tuning that gains Adam's adaptivity without storing moment estimates in memory, outperforming MeZO with up to 70% fewer forward passes.

Key contributions

  • Introduces AdaMeZO, an Adam-style zeroth-order optimizer for efficient LLM fine-tuning.
  • Leverages Adam-style moment estimates without memory-intensive storage.
  • Achieves up to 70% fewer forward passes than MeZO, improving convergence speed.
  • Demonstrates adaptability to diverse loss landscapes through trajectory analysis.

Why it matters

Fine-tuning large language models often requires significant GPU memory, limiting accessibility. While MeZO reduces memory, it suffers from slow convergence. AdaMeZO offers a solution by providing Adam-style speed without the high memory cost, making LLM fine-tuning more efficient and accessible.

Original Abstract

Fine-tuning LLMs is necessary for various dedicated downstream tasks, but classic backpropagation-based fine-tuning methods require substantial GPU memory. To this end, a recent work, MeZO, which relies solely on forward passes to fine-tune LLMs, significantly reduces GPU requirements at the cost of slower convergence due to its indifference to loss landscapes. Standard solutions, such as Adam, explore loss landscapes by estimating the first- and second-order moments and storing them in memory to guide the model's movement through dimensions with lower curvature and vice versa. However, directly applying Adam negates MeZO's advantage as it will triple the memory requirement. In light of this, we propose AdaMeZO, a zeroth-order optimizer that leverages Adam-style first- and second-moment estimates without maintaining them in memory. We present a theoretical analysis of AdaMeZO, corroborated by extensive experiments demonstrating AdaMeZO's performance, showing that AdaMeZO can outperform MeZO while requiring up to $70\%$ fewer forward passes. Trajectory visualizations affirm AdaMeZO's ability to adapt to diverse loss landscapes.
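To make the forward-pass-only idea concrete, here is a minimal sketch of the MeZO-style zeroth-order step that AdaMeZO builds on: the gradient is estimated from two forward passes along a random perturbation, and the perturbation is regenerated from a seed rather than stored, which is what keeps memory low. This is an illustrative toy (a quadratic loss stands in for an LLM's loss, and the `loss`/`mezo_step` names are hypothetical), not the paper's actual AdaMeZO algorithm, whose moment-free Adam-style updates are detailed in the paper itself.

```python
import numpy as np

def loss(theta):
    # Toy quadratic loss standing in for an LLM's training loss.
    return float(np.sum(theta ** 2))

def mezo_step(theta, lr=0.1, eps=1e-3, seed=0):
    """One MeZO-style zeroth-order step (SPSA estimate, two forward passes).

    The perturbation z is regenerated from a seed instead of being stored,
    the memory trick that lets forward-only fine-tuning avoid gradient and
    optimizer-state memory.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)
    # Projected gradient estimate from two forward passes only.
    g_hat = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)
    # Plain ZO-SGD update; AdaMeZO would additionally rescale this step
    # with Adam-style first/second-moment information, recomputed rather
    # than kept in memory.
    return theta - lr * g_hat * z

theta = np.ones(4)
initial_loss = loss(theta)
for t in range(200):
    theta = mezo_step(theta, seed=t)
```

On this toy problem the loss shrinks steadily even though no gradient is ever backpropagated, which is the property MeZO exploits; AdaMeZO's contribution is making such steps curvature-aware without paying Adam's memory cost.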
