ArXiv TLDR

Benchmarking Optimizers for MLPs in Tabular Deep Learning

arXiv: 2604.15297

Yury Gorishniy, Ivan Rubachev, Dmitrii Feoktistov, Artem Babenko

cs.LG

TLDR

This paper benchmarks optimizers for MLPs on tabular data, finding that the Muon optimizer consistently outperforms AdamW and is a strong practical alternative.

Key contributions

  • Systematically benchmarked optimizers for MLP-based models on tabular datasets.
  • Discovered the Muon optimizer consistently outperforms AdamW in tabular deep learning.
  • Identified Exponential Moving Average (EMA) as an effective technique for improving AdamW on vanilla MLPs.
  • Recommended Muon as a strong practical choice for practitioners, provided its training-efficiency overhead is affordable.
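The paper's EMA technique maintains a smoothed copy of the model weights alongside training and uses that copy at evaluation time. A minimal sketch of the idea (the class name, dict-of-scalars representation, and decay value are my own assumptions, not the paper's implementation):

```python
class WeightEMA:
    """Exponential moving average of model weights.

    Keeps a "shadow" copy that is updated after each optimizer step:
    shadow = decay * shadow + (1 - decay) * current_weight.
    """

    def __init__(self, weights, decay=0.999):
        self.decay = decay
        # Initialize the shadow copy from the current weights.
        self.shadow = {name: float(value) for name, value in weights.items()}

    def update(self, weights):
        """Call once per training step with the latest weights."""
        d = self.decay
        for name, value in weights.items():
            self.shadow[name] = d * self.shadow[name] + (1 - d) * float(value)


# Toy usage: with decay=0.5, two updates toward 1.0 move the shadow to 0.75.
ema = WeightEMA({"w": 0.0}, decay=0.5)
ema.update({"w": 1.0})
ema.update({"w": 1.0})
```

In practice the shadow weights, not the raw trained weights, are loaded into the model for evaluation; frameworks such as PyTorch offer ready-made equivalents (e.g. `torch.optim.swa_utils.AveragedModel`).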

Why it matters

This research fills a critical gap by systematically evaluating optimizers for tabular deep learning, a domain often overlooked. Discovering Muon's superior performance over the standard AdamW offers practitioners a significant upgrade for training MLPs on tabular data, potentially leading to more accurate and efficient models.
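For context on what Muon does differently from AdamW: it applies momentum to each weight matrix's gradient and then approximately orthogonalizes the resulting update via a Newton–Schulz iteration. The sketch below is a rough NumPy illustration based on the publicly available Muon reference implementation, not code from this paper; the iteration coefficients, scaling heuristic, and hyperparameter values are assumptions:

```python
import numpy as np


def newton_schulz(G, steps=5, eps=1e-7):
    """Approximately orthogonalize a matrix via a quintic Newton-Schulz iteration."""
    # Coefficients taken from the public Muon reference implementation (assumption).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + eps)  # normalize so singular values are < 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T  # work with the smaller Gram matrix X @ X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    return X.T if transposed else X


def muon_step(W, grad, buf, lr=0.02, momentum=0.95):
    """One Muon-style update for a 2D weight matrix W."""
    buf = momentum * buf + grad          # momentum accumulation
    update = newton_schulz(buf)          # orthogonalized update direction
    # Shape-dependent scale factor, following the reference implementation (assumption).
    scale = max(1.0, W.shape[0] / W.shape[1]) ** 0.5
    return W - lr * scale * update, buf
```

The orthogonalization makes the update's singular values roughly uniform, which is where Muon's extra per-step cost (the matrix multiplications in the iteration) comes from; this is the "efficiency overhead" the paper weighs against its accuracy gains.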

Original Abstract

MLP is a heavily used backbone in modern deep learning (DL) architectures for supervised learning on tabular data, and AdamW is the go-to optimizer used to train tabular DL models. Unlike architecture design, however, the choice of optimizer for tabular DL has not been examined systematically, despite new optimizers showing promise in other domains. To fill this gap, we benchmark \Noptimizers optimizers on \Ndatasets tabular datasets for training MLP-based models in the standard supervised learning setting under a shared experiment protocol. Our main finding is that the Muon optimizer consistently outperforms AdamW, and thus should be considered a strong and practical choice for practitioners and researchers, if the associated training efficiency overhead is affordable. Additionally, we find exponential moving average of model weights to be a simple yet effective technique that improves AdamW on vanilla MLPs, though its effect is less consistent across model variants.
