ArXiv TLDR

Robust and Fast Training via Per-Sample Clipping

2605.02701

Davide Nobile, Philipp Grohs

math.OC cs.LG stat.ML

TLDR

PS-Clip-SGD offers robust, fast training with optimal convergence rates under heavy-tailed noise, outperforming standard methods even after accounting for the overhead of per-sample clipping.

Key contributions

  • Introduces PS-Clip-SGD for robust and fast training in non-convex optimization (see the sketch after this list).
  • Achieves optimal in-expectation and high-probability convergence rates under heavy-tailed noise.
  • Empirically outperforms vanilla SGD and standard clipping on AlexNet/CIFAR-100.
  • Shows that, with gradient accumulation, clipping at the mini-batch level improves training performance at virtually no additional computational cost.
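
The core estimator is simple enough to sketch. Below is a minimal, illustrative PyTorch version of one PS-Clip-SGD step, assuming "per-sample clipping" means clipping each example's gradient to a fixed norm threshold before averaging. The names `clip_threshold` and `lr`, the looped per-sample backward passes, and the omission of momentum are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of one per-sample clipped SGD step (not the paper's code).
# Assumption: each example's gradient is clipped to `clip_threshold` in global
# L2 norm, and the clipped gradients are then averaged for a plain SGD update.
import torch

def ps_clip_sgd_step(model, loss_fn, xs, ys, clip_threshold=1.0, lr=0.1):
    params = [p for p in model.parameters() if p.requires_grad]
    clipped_sum = [torch.zeros_like(p) for p in params]  # running sum of clipped grads

    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Global L2 norm of this sample's gradient.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_threshold / (total_norm.item() + 1e-12))
        for acc, g in zip(clipped_sum, grads):
            acc.add_(g, alpha=scale)  # accumulate the clipped per-sample gradient

    with torch.no_grad():
        for p, acc in zip(params, clipped_sum):
            p.add_(acc, alpha=-lr / len(xs))  # SGD step on the averaged estimate
```

In practice one would vectorize the per-sample gradient computation rather than looping over examples; the loop here only keeps the sketch short and self-contained.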

Why it matters

This paper introduces PS-Clip-SGD, a robust gradient estimator with optimal in-expectation and high-probability convergence guarantees for non-convex optimization under heavy-tailed noise. Empirically, it outperforms vanilla SGD with momentum and standard gradient clipping when training AlexNet on CIFAR-100, even after accounting for the extra cost of per-sample clipping. The work also shows that, under gradient accumulation, clipping at the mini-batch level is a practical, nearly free improvement, challenging the common practice of clipping only after all accumulation steps.

Original Abstract

We propose a robust gradient estimator based on per-sample gradient clipping and analyze its properties both theoretically and empirically. We show that the resulting method, per-sample clipped SGD (PS-Clip-SGD), achieves optimal in-expectation convergence rates for non-convex optimization problems under heavy-tailed gradient noise. Moreover, we establish high-probability convergence guarantees that match the in-expectation rates up to polylogarithmic factors in the failure probability. We complement our theoretical results with multiple numerical experiments. In particular, we demonstrate that PS-Clip-SGD outperforms both vanilla SGD with momentum and standard gradient clipping when training AlexNet on the CIFAR-100 dataset, even after accounting for the additional computational time caused by per-sample clipping. We also empirically show that, in the presence of gradient accumulation, applying clipping at the mini-batch level can improve training performance while incurring virtually no additional computational cost. This finding is particularly interesting, as it contradicts the common practice of applying clipping only after all accumulation steps have been completed.
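
The accumulation finding is easiest to see side by side. The sketch below contrasts the two placements of clipping under gradient accumulation that the abstract describes: clipping each mini-batch gradient before adding it to the accumulator, versus the common practice of clipping once after all accumulation steps. The helper names, the manual accumulation, and the default `max_norm=1.0` are illustrative assumptions, not the paper's exact setup.

```python
# Two placements of gradient clipping under gradient accumulation (sketch).
import torch

def clip_(grads, max_norm):
    # Scale a list of gradient tensors so their global L2 norm is <= max_norm.
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, max_norm / (total.item() + 1e-12))
    return [g * scale for g in grads]

def step_clip_per_minibatch(model, optimizer, loss_fn, micro_batches, max_norm=1.0):
    # Clip each mini-batch gradient before accumulating (the variant the
    # abstract reports as beneficial).
    params = [p for p in model.parameters() if p.requires_grad]
    acc = [torch.zeros_like(p) for p in params]
    for x, y in micro_batches:
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        for a, g in zip(acc, clip_(list(grads), max_norm)):
            a.add_(g)
    for p, a in zip(params, acc):
        p.grad = a / len(micro_batches)
    optimizer.step()

def step_clip_after_accumulation(model, optimizer, loss_fn, micro_batches, max_norm=1.0):
    # Common practice: accumulate all mini-batch gradients first, then clip
    # once before the optimizer step.
    optimizer.zero_grad()
    for x, y in micro_batches:
        (loss_fn(model(x), y) / len(micro_batches)).backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
```

Both routines cost essentially the same number of forward and backward passes, which is why the per-mini-batch variant comes at virtually no extra computational cost.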
