ArXiv TLDR

Efficient Memory Management for Large Language Model Serving with PagedAttention

arXiv:2309.06180

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng + 4 more

cs.LG, cs.DC

TLDR

PagedAttention introduces a virtual memory-inspired method to efficiently manage key-value cache memory in large language model serving, significantly boosting throughput and reducing memory waste.

Key contributions

  • Proposes PagedAttention, an attention algorithm that borrows paging techniques from operating systems to minimize KV cache memory fragmentation and duplication (a minimal sketch of the idea follows this list).
  • Develops vLLM, a serving system that achieves near-zero KV cache memory waste and enables flexible KV cache sharing across requests.
  • Demonstrates 2-4× throughput improvement over state-of-the-art systems like FasterTransformer and Orca, especially for longer sequences and larger models.
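
For intuition, here is a minimal sketch of the block-table bookkeeping that paging enables. All names (PagedKVCacheAllocator, append_token, BLOCK_SIZE) are illustrative, not vLLM's actual API: KV tensors live in fixed-size blocks, each request maps its logical token positions to physical blocks through a block table, and memory is claimed one block at a time as the sequence grows.

```python
# Illustrative sketch, not vLLM's actual implementation: a paged KV cache
# allocator in the spirit of PagedAttention. KV tensors are stored in
# fixed-size blocks; each request owns a block table mapping its logical
# token positions to physical blocks, so memory is claimed on demand and
# internal fragmentation is bounded by one partially filled block per request.

BLOCK_SIZE = 16   # tokens per KV block (an illustrative value)


class PagedKVCacheAllocator:
    def __init__(self, num_blocks: int):
        # Pool of free physical block ids; a real system backs them with
        # preallocated GPU tensors holding the keys and values.
        self.free_blocks = list(range(num_blocks))
        self.block_tables: dict[str, list[int]] = {}   # request id -> block ids
        self.seq_lens: dict[str, int] = {}             # request id -> cached tokens

    def append_token(self, request_id: str) -> int:
        """Reserve KV space for one new token; return the physical block to write."""
        table = self.block_tables.setdefault(request_id, [])
        length = self.seq_lens.get(request_id, 0)
        if length % BLOCK_SIZE == 0:                   # last block full (or none yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted; a scheduler would preempt here")
            table.append(self.free_blocks.pop())
        self.seq_lens[request_id] = length + 1
        return table[-1]

    def free(self, request_id: str) -> None:
        """Return all blocks of a finished request to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(request_id, []))
        self.seq_lens.pop(request_id, None)


# Usage: requests grow independently, with no max-length buffer reserved up front.
alloc = PagedKVCacheAllocator(num_blocks=64)
for _ in range(20):
    alloc.append_token("req-A")                        # 20 tokens -> 2 blocks
alloc.append_token("req-B")                            # 1 token   -> 1 block
print(alloc.block_tables)                              # {'req-A': [63, 62], 'req-B': [61]}
alloc.free("req-A")                                    # blocks 63 and 62 become reusable
```

Earlier systems reserved a contiguous, maximum-length KV buffer per request, so short outputs stranded most of that memory; allocating fixed-size blocks on demand removes that internal fragmentation and lets more requests fit into a batch.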

Why it matters

Efficient memory management is critical for scaling large language model serving to handle many concurrent requests without excessive resource use. By applying virtual memory concepts to KV cache management, this paper addresses a key bottleneck that limits batch size and throughput. The resulting system, vLLM, enables more cost-effective and scalable deployment of LLMs, which is essential as models grow larger and more complex.

Original Abstract

High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted by fragmentation and redundant duplication, limiting the batch size. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. Our evaluations show that vLLM improves the throughput of popular LLMs by 2-4× with the same level of latency compared to the state-of-the-art systems, such as FasterTransformer and Orca. The improvement is more pronounced with longer sequences, larger models, and more complex decoding algorithms. vLLM's source code is publicly available at https://github.com/vllm-project/vllm.
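
The "flexible sharing of KV cache within and across requests" the abstract mentions can be pictured as reference-counted blocks with copy-on-write, as in this hedged sketch (SharedBlockManager, fork, and write are hypothetical names, not vLLM's code): sequences forked from the same prompt, e.g. during parallel sampling or beam search, reuse the prompt's physical blocks and copy a block only when one of them needs to modify it.

```python
# Hedged sketch of copy-on-write KV block sharing (illustrative names, not
# vLLM's actual code). Sequences forked from the same prompt share physical
# blocks via reference counts; a shared block is duplicated only when one
# sequence needs to write into it.

class SharedBlockManager:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))
        self.ref_count: dict[int, int] = {}            # physical block id -> users

    def allocate(self) -> int:
        block = self.free.pop()
        self.ref_count[block] = 1
        return block

    def fork(self, block_table: list[int]) -> list[int]:
        """Fork a sequence (e.g. one beam or sample): share blocks, copy nothing."""
        for b in block_table:
            self.ref_count[b] += 1
        return list(block_table)

    def write(self, block_table: list[int], idx: int) -> int:
        """Copy-on-write: duplicate a block only if another sequence still uses it."""
        b = block_table[idx]
        if self.ref_count[b] > 1:
            self.ref_count[b] -= 1
            b = self.allocate()                        # a real system also copies the KV data
            block_table[idx] = b
        return b


# Two samples share the 3-block prompt; only the block one of them writes diverges.
mgr = SharedBlockManager(num_blocks=8)
prompt = [mgr.allocate() for _ in range(3)]            # blocks for the shared prompt
sample_a, sample_b = mgr.fork(prompt), mgr.fork(prompt)
mgr.write(sample_a, 2)                                 # sample A copies only the last block
print(prompt, sample_a, sample_b)                      # [7, 6, 5] [7, 6, 4] [7, 6, 5]
```

This sharing is one reason the paper reports larger gains for more complex decoding algorithms such as parallel sampling and beam search, where many sequences reuse the same long prompt.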
