ArXiv TLDR

FreeScale: Distributed Training for Sequence Recommendation Models with Minimal Scaling Cost

arXiv: 2604.24073

Chenhao Feng, Haoli Zhang, Shakhzod Ali-Zade, Yanli Zhao, Liang Luo + 15 more

cs.LG, cs.AI, cs.DC, cs.IR

TLDR

FreeScale optimizes distributed training for sequence recommendation models, reducing computational bubbles by up to 90.3% on 256 H100 GPUs.

Key contributions

  • Mitigates straggler problems via meticulously load-balanced input samples (see the sketch after this list).
  • Minimizes blocking communication by overlapping prioritized embedding communications with computations.
  • Resolves GPU resource contention when computation and communication overlap by using SM-Free communication.
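
To make the first contribution concrete, here is a minimal, hypothetical sketch of load-balanced sample assignment: variable-length interaction sequences are packed greedily onto ranks so no GPU sits idle waiting on a straggler. The quadratic cost proxy and the longest-first greedy packing are illustrative assumptions; the paper's actual cost model and scheduler are not reproduced here.

    # Hypothetical load-balancing sketch: assign variable-length sequence samples
    # to data-parallel ranks so per-rank compute cost is roughly even.
    # The cost proxy (attention cost ~ length ** 2) and the longest-first greedy
    # packing are illustrative assumptions, not FreeScale's published algorithm.
    import heapq
    from typing import List

    def balance_samples(seq_lengths: List[int], num_ranks: int) -> List[List[int]]:
        """Return, for each rank, the list of sample indices assigned to it."""
        heap = [(0.0, rank) for rank in range(num_ranks)]  # (accumulated cost, rank)
        heapq.heapify(heap)
        assignment: List[List[int]] = [[] for _ in range(num_ranks)]
        # Place the most expensive samples first, always onto the least-loaded rank.
        for idx in sorted(range(len(seq_lengths)), key=lambda i: seq_lengths[i], reverse=True):
            cost = float(seq_lengths[idx]) ** 2  # assumed attention-dominated cost
            load, rank = heapq.heappop(heap)
            assignment[rank].append(idx)
            heapq.heappush(heap, (load + cost, rank))
        return assignment

    # Example: skewed interaction-history lengths spread across 2 ranks.
    print(balance_samples([4096, 2048, 1024, 512, 256, 128, 64, 32], num_ranks=2))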

Why it matters

This paper addresses critical inefficiencies in large-scale distributed training for recommendation models, specifically computational bubbles caused by stragglers and slow communication. FreeScale offers a significant performance boost, enabling more efficient use of GPU resources and faster training for industrial applications. This is crucial for deploying modern deep learning recommendation systems at scale.

Original Abstract

Modern industrial Deep Learning Recommendation Models typically extract user preferences through the analysis of sequential interaction histories, subsequently generating predictions based on these derived interests. The inherent heterogeneity in data characteristics frequently results in substantial under-utilization of computational resources during large-scale training, primarily due to computational bubbles caused by severe stragglers and slow blocking communications. This paper introduces FreeScale, a solution designed to (1) mitigate the straggler problem through meticulously load-balanced input samples, (2) minimize blocking communication by overlapping prioritized embedding communications with computations, and (3) resolve GPU resource competition during computation and communication overlapping by communicating through SM-Free techniques. Empirical evaluation demonstrates that FreeScale achieves up to 90.3% reduction in computational bubbles when applied to real-world workloads running on 256 H100 GPUs.
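
As a rough illustration of the communication-computation overlap the abstract describes, the following sketch launches the embedding all-to-all asynchronously and runs dense computation while it is in flight. It uses only standard PyTorch async collectives; it does not reproduce FreeScale's prioritization or its SM-Free transport (ordinary NCCL collectives still occupy SMs), and the function and tensor names are assumptions for illustration.

    # Hypothetical overlap sketch using standard PyTorch async collectives.
    # It shows the generic pattern of hiding embedding communication behind
    # dense computation; it is NOT FreeScale's prioritized, SM-Free scheme
    # (plain NCCL collectives still consume SMs).
    import torch
    import torch.distributed as dist

    def forward_with_overlap(embedding_shard, dense_input, dense_mlp):
        # Start exchanging embedding rows between ranks without blocking.
        recv_buf = torch.empty_like(embedding_shard)
        handle = dist.all_to_all_single(recv_buf, embedding_shard, async_op=True)

        # Dense computation that does not depend on the exchanged embeddings
        # proceeds while the collective is in flight.
        dense_out = dense_mlp(dense_input)

        # Block only at the point where the remote embeddings are needed.
        handle.wait()
        return dense_out, recv_buf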
