Quantamination: Dynamic Quantization Leaks Your Data Across the Batch
Hanna Foerster, Ilia Shumailov, Cheng Zhang, Yiren Zhao, Jamie Hayes, et al.
TLDR
Dynamic quantization in ML serving can leak sensitive data between inputs placed in the same batch, exposing one user's input to another user's inference.
Key contributions
- Identifies Quantamination, a side channel through which dynamic quantization leaks information across inputs in the same batch (a minimal sketch follows this list).
- Shows that at least 4 popular ML frameworks either default to, or allow, configurations that enable this data leakage.
- Demonstrates that an attacker can partially or even fully recover other users' inputs from the same batch.
- Highlights a critical privacy risk in widely used dynamic quantization implementations.
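To make the mechanism concrete, here is a minimal, self-contained sketch (not the paper's code) of how per-tensor dynamic quantization couples batch rows. It assumes symmetric int8 absmax quantization with a single scale computed over the whole batch; the function name and all values are hypothetical.

```python
import numpy as np

def dynamic_quantize_per_tensor(x: np.ndarray):
    """Symmetric int8 absmax quantization with a single scale for the whole
    tensor; per-tensor granularity is what couples rows of a batch together."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
victim = rng.normal(size=(1, 8))      # another user's activations
outlier = 50.0 * np.ones((1, 8))      # attacker-controlled row

q_alone, s_alone = dynamic_quantize_per_tensor(victim)
q_batched, s_batched = dynamic_quantize_per_tensor(np.vstack([victim, outlier]))

# The victim's quantized codes change once the attacker's row joins the batch:
# the shared scale is now set by the attacker's 50.0, crushing the victim's
# values into a handful of coarse levels.
print(q_alone[0], s_alone)
print(q_batched[0], s_batched)
```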
Why it matters
This paper uncovers a serious privacy vulnerability in the dynamic quantization used by major ML frameworks. It urges practitioners to reconsider default quantization settings, in particular how quantization statistics are computed across a batch, to prevent sensitive data from leaking between batched inputs (one decoupled configuration is sketched below).
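One illustrative direction, assumed here rather than taken from the paper, is a quantization granularity whose statistics never cross row boundaries. The sketch below shows per-row (per-token) dynamic quantization, where each user's scale depends only on their own row; the function name is hypothetical.

```python
import numpy as np

def dynamic_quantize_per_row(x: np.ndarray):
    """Per-row (per-token) dynamic int8 quantization: each row's scale is
    derived only from that row, so one user's statistics never influence
    another user's quantized values."""
    absmax = np.abs(x).max(axis=1, keepdims=True)
    scales = np.maximum(absmax, 1e-12) / 127.0   # guard all-zero rows
    q = np.clip(np.round(x / scales), -127, 127).astype(np.int8)
    return q, scales

rng = np.random.default_rng(0)
batch = np.vstack([rng.normal(size=(1, 8)),      # one user's row
                   50.0 * np.ones((1, 8))])      # an outlier row next to it
q, s = dynamic_quantize_per_row(batch)
# Row 0's codes and scale are identical whether or not row 1 is present:
q_solo, s_solo = dynamic_quantize_per_row(batch[:1])
assert np.array_equal(q[0], q_solo[0]) and s[0] == s_solo[0]
```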
Original Abstract
Dynamic quantization emerged as a practical approach to increase the utilization and efficiency of the machine learning serving flow. Unlike static quantization, which applies quantization offline, dynamic quantization operates on tensors at run-time, adapting its parameters to the actual input data. Today's mainstream machine learning frameworks, including ML compilers and inference engines, frequently recommend dynamic quantization as an initial step for optimizing model serving. This is because dynamic quantization can significantly reduce memory usage and computational load, leading to faster token generation and improved model serving efficiency without substantial loss in model accuracy. In this paper, we reveal a critical vulnerability in dynamic quantization: an adversary can exploit such a quantization strategy to steal sensitive user data placed in the same batch as the adversary's input. Our analysis demonstrates that dynamic quantization, when improperly implemented or configured, can create side channels that expose information about other inputs within the same batch. We call this phenomenon Quantamination, describing contamination from quantization. Specifically, we show that at least 4 of the most popular ML frameworks in use today either default to or can use configurations that leak data across the batch boundary. This data leakage, in theory, allows attackers to partially or even fully recover other users' batched input data, representing a serious privacy risk for existing ML serving frameworks.
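As a toy illustration of the recovery claim (a sketch under strong assumptions, not the paper's attack): with per-tensor absmax quantization, every dequantized value is an integer multiple of the batch-wide scale, so an attacker who knows their own probe row can read the scale off their own output and from it bound the co-batched input. `serve_batch` and all values below are hypothetical.

```python
import numpy as np

def serve_batch(batch: np.ndarray) -> np.ndarray:
    """Toy serving step with per-tensor dynamic int8 quantization:
    one scale derived from the whole batch, then quantize/dequantize."""
    scale = np.abs(batch).max() / 127.0
    q = np.clip(np.round(batch / scale), -127, 127)
    return q * scale

rng = np.random.default_rng(1)
victim = rng.uniform(-3.0, 3.0, size=(1, 256))   # secret co-batched input
probe = np.linspace(0.0, 1.0, 256)[None, :]      # attacker's known input

out = serve_batch(np.vstack([victim, probe]))
probe_out = out[1]                               # attacker sees only own row

# Dequantized outputs are integer multiples of the shared scale; with probe
# spacing finer than the scale, the smallest positive gap between distinct
# output levels *is* the scale.
levels = np.unique(probe_out)
scale_est = np.min(np.diff(levels))

# The scale equals max|batch| / 127, and the attacker's probe peaks at 1.0,
# so anything larger must come from the victim's row:
print("recovered max|victim| ~", 127 * scale_est)
print("true      max|victim| =", np.abs(victim).max())
```

Only the batch absmax leaks in this single-query toy; the paper's partial and full recovery claims go further than this sketch.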