One Pool, Two Caches: Adaptive HBM Partitioning for Accelerating Generative Recommender Serving
Wenjun Yu, Shuguang Han, Amelie Chi Zhou
TLDR
HELM adaptively partitions GPU HBM between embedding and KV caches for generative recommenders, reducing P99 latency by 24-38% across diverse workloads.
Key contributions
- Adaptive Memory Allocation: A PPO-based controller dynamically adjusts the HBM split between the EMB and KV caches (a minimal sketch follows this list).
- EMB-KV-Aware Scheduling: Routes each request by jointly weighing KV-cache residency, embedding locality, and node load (see the second sketch after this list).
- Reduces P99 latency by 24-38% and achieves 93.5-99.6% SLO satisfaction on a 32-node A100 cluster.
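To make the allocation idea concrete, here is a minimal Python sketch of one adaptive partitioning step. The statistics, the proportional update rule, and the step clamp are assumptions for exposition (HELM uses a learned PPO policy, not this heuristic); the clamp stands in for keeping H2D refill traffic small enough to absorb off the critical path.

```python
# Hypothetical sketch of an adaptive HBM partitioning step. All names,
# thresholds, and the update rule are illustrative assumptions, not HELM's
# actual interface.
from dataclasses import dataclass

@dataclass
class WorkloadStats:
    emb_hit_rate: float   # hit rate of the embedding hot cache
    kv_evict_rate: float  # fraction of requests whose KV blocks were evicted

def target_emb_fraction(stats: WorkloadStats,
                        current: float,
                        max_step: float = 0.02) -> float:
    """Return the next EMB share of HBM (the remainder goes to the KV cache).

    A real controller would be a learned PPO policy; a simple proportional
    rule stands in for it here. The per-step change is clamped so each
    reallocation (and the H2D refill it triggers) stays small.
    """
    # If the embedding cache misses often, grow it; if KV blocks are being
    # evicted, shrink it in favor of the KV cache.
    pressure = (1.0 - stats.emb_hit_rate) - stats.kv_evict_rate
    proposed = current + 0.1 * pressure
    # Rate-limit the move to avoid refill storms on the critical path.
    step = max(-max_step, min(max_step, proposed - current))
    return min(0.9, max(0.1, current + step))
```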
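And a corresponding sketch of EMB-KV-aware routing: each candidate node is scored on KV residency, embedding locality, and load, and the request goes to the highest scorer. The node fields, weights, and linear scoring rule are illustrative assumptions, not the paper's exact formula.

```python
# Illustrative sketch of EMB-KV-aware request routing. Field names, weights,
# and the scoring rule are assumed for exposition.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float  # normalized queue/compute load in [0, 1]
    resident_sessions: set = field(default_factory=set)  # sessions with live KV blocks
    hot_embedding_ids: set = field(default_factory=set)  # embedding rows in the hot cache

def route(request_session: str, request_emb_ids: set, nodes: list[Node],
          w_kv: float = 2.0, w_emb: float = 1.0, w_load: float = 1.5) -> Node:
    """Pick the node that best balances KV residency, embedding locality, and load."""
    def score(n: Node) -> float:
        kv_resident = 1.0 if request_session in n.resident_sessions else 0.0
        emb_locality = (len(request_emb_ids & n.hot_embedding_ids)
                        / max(1, len(request_emb_ids)))
        return w_kv * kv_resident + w_emb * emb_locality - w_load * n.load
    return max(nodes, key=score)
```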
Why it matters
Generative recommenders struggle with GPU HBM contention between embedding and KV caches, leading to high latency. HELM repartitions HBM dynamically at runtime and routes requests accordingly, significantly improving P99 latency and service-level-objective (SLO) satisfaction. This makes serving complex generative models more reliable and efficient.
Original Abstract
Generative Recommender (GR) inference places embedding hot caches (EMB) and KV caches in direct competition for limited GPU HBM: allocating more memory to one improves its efficiency but degrades the other. Existing systems optimize them in isolation, overlooking that the optimal EMB-KV allocation ratio can shift by up to 0.35 across workload regimes, leaving 20-30% latency improvement unrealized. While online reallocation is required to close this gap, naive approaches introduce H2D refill traffic on the critical path, causing P99 SLO violations. To address this, we present HELM, which jointly manages HBM allocation and request routing at runtime through two key components: (1) Adaptive Memory Allocation, a three-layer PPO-based controller (frozen base policy, online residual adapter, and burst-aware recovery controller) that achieves 32 μs decision latency while staying within 0.024-0.029 of the offline-optimal ratio; and (2) EMB-KV-Aware Scheduling, which routes requests by jointly considering KV residency, embedding locality, and node load to avoid routing inefficiencies under heterogeneous allocations. Evaluations on three production-scale datasets over a 32-node A100 cluster show that HELM reduces P99 latency by 24-38% over the best static policy and achieves 93.5-99.6% SLO satisfaction across Steady, Trend, and Burst workloads, significantly outperforming state-of-the-art baselines without sacrificing throughput.
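The abstract's three-layer controller composes a frozen PPO base policy with a lightweight online residual adapter and a burst-aware recovery override. The sketch below shows one plausible way those layers could combine into a single allocation decision; the interfaces, bounds, and fallback ratio are illustrative assumptions, not HELM's actual implementation.

```python
# A minimal sketch of the three-layer controller structure described in the
# abstract. The callables, bounds, and fallback ratio below are assumed.
def decide_ratio(state,                # workload feature vector
                 base_policy,          # frozen PPO policy: state -> ratio in [0, 1]
                 residual_adapter,     # small online-updated net: state -> delta
                 burst_detected: bool,
                 safe_ratio: float = 0.5) -> float:
    """Compose the three layers into one EMB:KV allocation decision."""
    if burst_detected:
        # Recovery layer: under a traffic burst, fall back to a conservative
        # split instead of trusting the (possibly stale) learned policies.
        return safe_ratio
    ratio = base_policy(state) + residual_adapter(state)  # residual correction
    return min(0.9, max(0.1, ratio))                      # keep within bounds
```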