ArXiv TLDR

CLion: Efficient Cautious Lion Optimizer with Enhanced Generalization

arXiv: 2604.14587

Feihu Huang, Guanyi Zhang, Songcan Chen

cs.LG · math.OC · stat.ML

TLDR

This paper introduces CLion, a Cautious Lion optimizer that improves on Lion with a provably tighter generalization error bound and a fast convergence rate for nonconvex deep learning training.

Key contributions

  • Analyzes the Lion optimizer's generalization via algorithmic stability, proving an error bound of O(1/(Nτ^T)), where N is the training sample size, τ > 0 the smallest absolute value of a nonzero element in the gradient estimator, and T the total iteration count.
  • Proposes CLion, a Cautious Lion optimizer, which applies the sign function more cautiously.
  • Proves that CLion attains a tighter generalization error of O(1/N); since τ is typically very small, this improves on Lion's O(1/(Nτ^T)).
  • Shows that CLion achieves a fast convergence rate of O(√d/T^(1/4)) under the ℓ1-norm of the gradient for nonconvex stochastic optimization, where d is the model dimension.

Why it matters

This paper addresses the missing generalization analysis for the popular Lion optimizer. By introducing CLion, it offers a more robust and efficient optimization method with provably better generalization and faster convergence, crucial for training large-scale deep learning models.

Original Abstract

Lion optimizer is a popular learning-based optimization algorithm in machine learning, which shows impressive performance in training many deep learning models. Although convergence property of the Lion optimizer has been studied, its generalization analysis is still missing. To fill this gap, we study generalization property of the Lion via algorithmic stability based on the mathematical induction. Specifically, we prove that the Lion has a generalization error of $O(\frac{1}{N\tau^T})$, where $N$ is training sample size, and $\tau>0$ denotes the smallest absolute value of non-zero element in gradient estimator, and $T$ is the total iteration number. In addition, we obtain an interesting byproduct that the SignSGD algorithm has the same generalization error as the Lion. To enhance generalization of the Lion, we design a novel efficient Cautious Lion (i.e., CLion) optimizer by cautiously using sign function. Moreover, we prove that our CLion has a lower generalization error of $O(\frac{1}{N})$ than $O(\frac{1}{N\tau^T})$ of the Lion, since the parameter $\tau$ generally is very small. Meanwhile, we study convergence property of our CLion optimizer, and prove that our CLion has a fast convergence rate of $O(\frac{\sqrt{d}}{T^{1/4}})$ under $\ell_1$-norm of gradient for nonconvex stochastic optimization, where $d$ denotes the model dimension. Extensive numerical experiments demonstrate effectiveness of our CLion optimizer.
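The abstract does not spell out CLion's exact update rule, only that it uses the sign function "cautiously." As a hedged illustration, here is a minimal NumPy sketch of a standard Lion step plus an assumed "cautious" variant that masks out update coordinates whose sign disagrees with the current gradient; the function name, hyperparameters, and the masking rule are assumptions for illustration, not the paper's verified algorithm.

```python
import numpy as np

def lion_step(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, cautious=False):
    """One Lion-style update (sketch, not the paper's exact method).

    With cautious=True, coordinates whose signed update disagrees with
    the current gradient are zeroed out -- one plausible reading of
    "cautiously using the sign function."
    """
    c = beta1 * m + (1.0 - beta1) * grad          # interpolated direction
    update = np.sign(c)                           # Lion's sign-based update
    if cautious:
        agree = (update * grad > 0).astype(grad.dtype)  # keep agreeing coords
        update = update * agree
    theta = theta - lr * update                   # parameter step
    m = beta2 * m + (1.0 - beta2) * grad          # momentum update
    return theta, m
```

For example, with stale momentum m = [10, 0, 0] and a gradient whose first coordinate has flipped sign, plain Lion still steps along the momentum's sign in that coordinate, while the cautious variant leaves it untouched.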
