ArXiv TLDR

Provable Quantization with Randomized Hadamard Transform

arXiv:2605.13810

Ying Feng, Piotr Indyk, Michael Kapralov, Dmitry Krachun, Boris Prokhorov

cs.LG, cs.DS

TLDR

This paper introduces dithered quantization with a single randomized Hadamard transform, achieving provably unbiased estimates and near-optimal MSE in O(d log d) time.

Key contributions

  • Introduces dithered quantization with a single randomized Hadamard transform: apply HD to the input, subtract a random scalar offset, then quantize.
  • Proves the method is unbiased, with MSE bounds that asymptotically match those of truly random rotation matrices.
  • Reduces computational cost from Θ(d²) for dense rotations to O(d log d) while preserving strong theoretical guarantees.

Why it matters

This work provides a provably near-optimal and computationally efficient vector quantization method. It resolves the trade-off between the speed of Hadamard transforms and the clean guarantees of dense random rotations, benefiting applications like similarity search, federated learning, and KV cache compression.

Original Abstract

Vector quantization via random projection followed by scalar quantization is a fundamental primitive in machine learning, with applications ranging from similarity search to federated learning and KV cache compression. While dense random rotations yield clean theoretical guarantees, they require $\Theta(d^2)$ time. The randomized Hadamard transform $HD$ reduces this cost to $O(d \log d)$, but its discrete structure complicates analysis and leads to weaker or purely empirical compression guarantees. In this work, we study a variant of this approach: dithered quantization with a single randomized Hadamard transform. Specifically, the quantizer applies $HD$ to the input vector and subtracts a random scalar offset before quantizing, injecting additional randomness at negligible cost. We prove that this approach is unbiased and provides mean squared error bounds that asymptotically match those achievable with truly random rotation matrices. In particular, we prove that a dithered version of TurboQuant achieves mean squared error $\bigl(\pi\sqrt{3}/2 + o(1)\bigr) \cdot 4^{-b}$ at $b$ bits per coordinate, where the $o(1)$ term vanishes uniformly over all unit vectors and all dimensions as the number of quantization levels grows.
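
To make the mechanism concrete, here is a minimal Python sketch of the pipeline the abstract describes: flip signs with a random diagonal matrix $D$, apply the Hadamard transform $H$ via a fast Walsh-Hadamard transform in $O(d \log d)$ time, subtract a single random scalar dither offset, quantize, and invert. The uniform grid and clipping radius below are illustrative assumptions, not the paper's construction; the dithered TurboQuant quantizer has its own grid, so only the overall structure is faithful here.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Orthonormal fast Walsh-Hadamard transform in O(d log d) time.
    Requires len(x) to be a power of two; the transform is its own inverse."""
    y = x.astype(float).copy()
    d = len(y)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / np.sqrt(d)

def dithered_hd_quantize(x, b, rng):
    """Quantize x at b bits per coordinate via an HD rotation plus dither.
    The uniform grid and clipping radius are illustrative choices,
    not the paper's TurboQuant quantizer."""
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)  # diagonal of the random sign matrix D
    y = fwht(signs * x)                      # rotated vector H D x

    levels = 2 ** b
    r = np.max(np.abs(y))                    # clipping radius (hypothetical choice)
    delta = 2.0 * r / levels                 # quantization step
    u = rng.uniform(0.0, delta)              # single random scalar dither offset
    q = np.floor((y + r - u) / delta)        # subtract the dither, then quantize
    q = np.clip(q, 0, levels - 1)            # q is what gets stored: b bits/coordinate
    y_hat = (q + 0.5) * delta - r + u        # dequantize and add the dither back

    return signs * fwht(y_hat)               # invert the rotation: D H y_hat

# Example: quantize a random unit vector at b = 4 bits per coordinate.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
x /= np.linalg.norm(x)                       # unit vector, as in the paper's bound
x_hat = dithered_hd_quantize(x, b=4, rng=rng)
print("squared error:", np.sum((x - x_hat) ** 2))
```

For scale, at $b = 4$ the paper's bound $\bigl(\pi\sqrt{3}/2 + o(1)\bigr) \cdot 4^{-b}$ evaluates to roughly $2.72 / 256 \approx 0.011$ for a unit vector, ignoring the $o(1)$ term; the naive clipping rule in this sketch lands in the same ballpark but should not be expected to match that constant.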
