ArXiv TLDR

Pion: A Spectrum-Preserving Optimizer via Orthogonal Equivalence Transformation

2605.12492

Kexuan Shi, Hanxuan Li, Zeju Qiu, Yandong Wen, Simon Buchholz + 1 more

cs.LG · stat.ML

TLDR

Pion is a spectrum-preserving optimizer for LLM training that updates weights via orthogonal equivalence transformations, keeping each weight matrix's singular values fixed throughout training.

Key contributions

  • Introduces Pion, an LLM optimizer based on orthogonal equivalence transformations.
  • Preserves singular values and spectral norm of weight matrices during training.
  • Modulates weight matrix geometry while keeping spectral properties fixed.
  • Offers stable and competitive performance for LLM pretraining and finetuning.

Why it matters

Unlike additive optimizers such as Adam and Muon, Pion preserves the spectral properties of weight matrices as it trains. This yields more stable training with competitive performance, making it a practical alternative for both LLM pretraining and finetuning.

Original Abstract

We introduce Pion, a spectrum-preserving optimizer for large language model (LLM) training based on orthogonal equivalence transformation. Unlike additive optimizers such as Adam and Muon, Pion updates each weight matrix through left and right orthogonal transformations, preserving its singular values throughout training. This yields an optimization mechanism that modulates the geometry of weight matrices while keeping their spectral norm fixed. We derive the Pion update rule, systematically examine its design choices, and analyze its convergence behavior along with several key properties. Empirical results show that Pion offers a stable and competitive alternative to standard optimizers for both LLM pretraining and finetuning.
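The core mechanism described above — updating a weight matrix by left and right orthogonal transformations, which leaves its singular values (and hence its spectral norm) unchanged — can be illustrated with a minimal NumPy sketch. This is not the paper's actual update rule (which the authors derive separately); it only demonstrates the spectrum-preserving property of an orthogonal equivalence transformation W → U W Vᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal factor;
    # fixing the signs of R's diagonal gives a uniform (Haar) draw.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

m, n = 6, 4
W = rng.standard_normal((m, n))   # a stand-in weight matrix
U = random_orthogonal(m, rng)     # left orthogonal transform
V = random_orthogonal(n, rng)     # right orthogonal transform
W_new = U @ W @ V.T               # orthogonal equivalence transformation

# Singular values, and therefore the spectral norm, are unchanged.
s_old = np.linalg.svd(W, compute_uv=False)
s_new = np.linalg.svd(W_new, compute_uv=False)
print(np.allclose(s_old, s_new))  # True
```

In an actual optimizer, U and V would be driven by the training gradient rather than drawn at random, but the invariant is the same: any orthogonal pair leaves the spectrum of W intact.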
