ArXiv TLDR

AGoQ: Activation and Gradient Quantization for Memory-Efficient Distributed Training of LLMs

arXiv:2605.00539

Wenxiang Lin, Juntao Huang, Luhan Zhang, Laili Li, Xiang Bao + 3 more

cs.CL, cs.DC

TLDR

AGoQ introduces 4-bit activation and 8-bit gradient quantization schemes that cut GPU memory use and speed up distributed LLM training.

Key contributions

  • Introduces layer-aware activation quantization for near 4-bit storage based on layer types and pipeline stages (see the sketch after this list).
  • Presents 8-bit gradient quantization with precision-preserving All-Reduce for memory and communication efficiency.
  • Reduces memory usage by up to 52% and boosts training speed by 1.34x for LLMs (8B-32B LLaMA).
  • Matches baseline convergence loss during pretraining and achieves comparable accuracy on downstream tasks.
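
To make the first contribution concrete, here is a minimal sketch of group-wise symmetric activation quantization with a layer-type-dependent bit-width policy. The BIT_POLICY mapping, group size, and function names are illustrative assumptions; the paper's actual allocation rules (which also depend on pipeline stage) are not reproduced here.

```python
import torch

# Hypothetical bit-width policy: AGoQ allocates bit-widths per layer type and
# pipeline stage; the exact rules are not in this summary, so this mapping is
# an illustrative assumption.
BIT_POLICY = {"linear": 4, "layernorm": 8, "attention": 8}

def quantize_activation(x: torch.Tensor, bits: int, group_size: int = 128):
    """Symmetric per-group quantization of an activation tensor (sketch only).

    4-bit values are held in int8 containers here for readability; real 4-bit
    storage would pack two values per byte.
    """
    orig_shape = x.shape
    groups = x.reshape(-1, group_size)
    scale = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit, 127 for 8-bit
    q = torch.clamp(torch.round(groups / scale * qmax), -qmax, qmax).to(torch.int8)
    return q, scale, orig_shape, qmax

def dequantize_activation(q, scale, orig_shape, qmax):
    return (q.float() / qmax * scale).reshape(orig_shape)

# Usage: stash the quantized activation during the forward pass and reconstruct
# it for the backward pass, trading a little precision for a large memory saving.
x = torch.randn(4, 1024)
q, s, shape, qmax = quantize_activation(x, bits=BIT_POLICY["linear"])
x_hat = dequantize_activation(q, s, shape, qmax)
```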

Why it matters

Existing quantized-training approaches break down at 4-bit activations and 8-bit gradients, leading to slow convergence or accuracy loss. AGoQ makes training at these low bit-widths practical, drastically cutting GPU memory use and speeding up distributed LLM training without degrading accuracy.

Original Abstract

Quantization is a key method for reducing the GPU memory requirement of training large language models (LLMs). Yet, current approaches are ineffective for 4-bit activations and 8-bit gradients, which would easily cause slow convergence or accuracy loss. To address this, we introduce AGoQ, incorporating two new techniques: 1) a layer-aware activation quantization algorithm that allocates appropriate bit-widths for activations of various layers based on their types and pipeline stages to achieve near 4-bit activation storage, and 2) a gradient quantization algorithm that reduces memory usage and shortens communication time by employing 8-bit gradient storage and precision-preserving 8-bit All-Reduce communication. We conduct extensive experiments using different sizes of LLMs on two GPU clusters (up to 64 GPUs), and the experimental results show that our AGoQ reduces the memory by up to 52% and achieves up to 1.34× improvement of training speed compared to state-of-the-art training systems Megatron-LM (w/ or w/o ZeRO), COAT and DeepSpeed with 8B to 32B LLaMA models, while achieving comparable convergence loss on pretraining and comparable accuracy on downstream tasks with LLaMA architectures.
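
The abstract's second technique, 8-bit gradient storage with a precision-preserving 8-bit All-Reduce, could look roughly like the following. This is a minimal sketch under assumed details (per-tensor int8 quantization, an all-gather-then-accumulate collective, and the hypothetical helper int8_allreduce_mean); the paper's actual algorithm is not reproduced here.

```python
import torch
import torch.distributed as dist

def int8_allreduce_mean(grad: torch.Tensor) -> torch.Tensor:
    """Average a gradient across ranks while sending only int8 payloads.

    Assumes torch.distributed has already been initialized. Each rank ships an
    int8 tensor plus one fp32 scale, and the summation happens in fp32 after
    dequantization so the reduction itself does not lose precision.
    """
    world_size = dist.get_world_size()

    # Per-tensor symmetric quantization to int8.
    scale = grad.abs().max().clamp(min=1e-8).reshape(1)
    q = torch.clamp(torch.round(grad / scale * 127), -127, 127).to(torch.int8)

    # Gather every rank's quantized gradient and its scale.
    q_list = [torch.empty_like(q) for _ in range(world_size)]
    s_list = [torch.empty_like(scale) for _ in range(world_size)]
    dist.all_gather(q_list, q)
    dist.all_gather(s_list, scale)

    # Dequantize and accumulate locally in fp32, then average.
    total = torch.zeros_like(grad, dtype=torch.float32)
    for qi, si in zip(q_list, s_list):
        total += qi.float() / 127 * si
    return total / world_size
```

A real system would more likely use a reduce-scatter/all-gather decomposition so that communication volume does not grow with the number of ranks; the gather-based version above only shows where quantization and dequantization sit relative to the collective.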
