ArXiv TLDR

Search Your Block Floating Point Scales!

arXiv: 2605.12464

Tanmaey Gupta, Hayden Prairie, Xiaoxia Wu, Reyna Abhyankar, Qingyang Wu + 8 more

cs.LG · cs.AR · cs.PF

TLDR

ScaleSearch optimizes Block Floating Point quantization scales by searching for the scale factor that minimizes quantization error, significantly improving the accuracy of quantized generative models.

Key contributions

  • Proposes ScaleSearch, a fine-grained search over BFP scale factors that minimizes quantization error (a minimal sketch follows this list).
  • Integrates with existing PTQ and low-precision attention methods, boosting their performance.
  • Introduces ScaleSearchAttention, an NVFP4-based attention algorithm for causal language models.
  • Reduces NVFP4 quantization error by 27% and improves language model PTQ accuracy by up to 15 points on MATH500 (Qwen3-8B).
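
The sketch below illustrates the core idea under stated assumptions: NVFP4-style blocks of 16 FP4 (E2M1) elements sharing one scale, a max-magnitude baseline scale, and a grid of smaller candidate scales standing in for the paper's search over the scale's mantissa bits. It is a minimal illustration, not the authors' implementation; the function names and the candidate grid are hypothetical.

```python
import numpy as np

# Representable magnitudes of FP4 E2M1, the element format used by NVFP4.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fp4_quant_dequant(x, scale):
    """Quantize a block to FP4 under the given scale, then dequantize back to float."""
    y = x / scale
    # Round each scaled magnitude to the nearest representable FP4 magnitude, keep the sign.
    idx = np.abs(np.abs(y)[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(y) * FP4_GRID[idx] * scale

def max_based_scale(x):
    """Standard BFP choice: scale so the block's max magnitude maps to the largest FP4 value."""
    return np.max(np.abs(x)) / FP4_GRID[-1]

def scale_search(x, num_candidates=32, shrink=0.5):
    """ScaleSearch-style selection (illustrative): try candidate scales at or below the
    max-based scale and keep the one with the smallest quantization error (MSE).
    The linspace grid is a simple stand-in for searching the scale's mantissa bits."""
    s_max = max_based_scale(x)
    candidates = np.linspace(shrink * s_max, s_max, num_candidates)
    errors = [np.mean((x - fp4_quant_dequant(x, s)) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=16).astype(np.float32)  # one 16-element NVFP4 block
    s0, s1 = max_based_scale(block), scale_search(block)
    e0 = np.mean((block - fp4_quant_dequant(block, s0)) ** 2)
    e1 = np.mean((block - fp4_quant_dequant(block, s1)) ** 2)
    print(f"max-based scale MSE: {e0:.5f}, searched scale MSE: {e1:.5f}")
```

On such toy blocks the searched scale never does worse than the max-based one, since the max-based scale is itself among the candidates; the paper quantifies this kind of gain as a 27% reduction in NVFP4 quantization error.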

Why it matters

Quantization is crucial for accelerating large generative models. This paper addresses a key limitation of Block Floating Point (BFP) quantization by optimizing how each block's scale factor is selected, yielding significant accuracy improvements while closely matching baseline performance. This advances the efficient deployment of large language models.

Original Abstract

Quantization has emerged as a standard technique for accelerating inference for generative models by enabling faster low-precision computations and reduced memory transfers. Recently, GPU accelerators have added first-class support for microscaling Block Floating Point (BFP) formats. Standard BFP algorithms use a fixed scale based on the maximum magnitude of the block. We observe that this scale choice can be suboptimal with respect to quantization errors. In this work, we propose ScaleSearch, an alternative strategy for selecting these scale factors: using a fine-grained search leveraging the mantissa bits in microscaling formats to minimize the quantization error for the given distribution. ScaleSearch can be integrated with existing quantization methods such as Post Training Quantization and low-precision attention, and is shown to improve their performance. Additionally, we introduce ScaleSearchAttention, an accelerated NVFP4-based attention algorithm, which uses ScaleSearch and adapted prior techniques to ensure near-0 performance loss for causal language modeling. Experiments show that ScaleSearch reduces quantization error by 27% for NVFP4 and improves language model PTQ by up to 15 points for MATH500 (Qwen3-8B), while ScaleSearchAttention improves Wikitext-2 PPL by up to 0.77 points for Llama 3.1 70B. The proposed methods closely match baseline performance while providing quantization accuracy improvements.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.