ArXiv TLDR

Strait: Perceiving Priority and Interference in ML Inference Serving

arXiv:2604.28175

Haidong Zhao, Nikolaos Georgantas

cs.LG

TLDR

Strait is an ML inference serving system that improves deadline satisfaction for dual-priority tasks by accurately predicting and managing GPU contention.

Key contributions

  • Strait enhances deadline satisfaction for dual-priority ML inference tasks under high GPU utilization.
  • It improves latency estimation by modeling data transfer contention and kernel execution interference.
  • It employs an adaptive prediction model whose latency estimates drive priority-aware scheduling.
  • Reduces high-priority task deadline violations by 1.02-11.18 percentage points with acceptable low-priority costs.
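The summary does not spell out the scheduling policy, but the core idea — use predicted latency to decide which requests can still meet their deadlines, serving high-priority traffic first — can be illustrated with a minimal sketch. This is an assumption-laden toy (greedy two-level EDF with a per-request `predicted_latency` field), not Strait's actual algorithm:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    deadline: float                          # absolute deadline (seconds)
    priority: int = field(compare=False)     # 0 = high, 1 = low
    predicted_latency: float = field(compare=False)
    name: str = field(compare=False, default="")

def schedule(requests, now=0.0):
    """Greedy priority-aware EDF sketch: serve high-priority requests
    first (earliest deadline within a priority level), and skip any
    request whose predicted completion time would miss its deadline."""
    order = sorted(requests, key=lambda r: (r.priority, r.deadline))
    t, served, dropped = now, [], []
    for r in order:
        finish = t + r.predicted_latency
        if finish <= r.deadline:
            served.append(r.name)
            t = finish
        else:
            dropped.append(r.name)
    return served, dropped
```

For example, a low-priority request whose predicted finish time overruns its deadline is dropped even when it arrived before the high-priority ones — the differentiated handling the summary describes, at the cost of low-priority violations.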

Why it matters

This paper addresses the critical challenge of timely ML inference for prioritized tasks in resource-constrained environments. Strait's novel latency prediction and priority-aware scheduling significantly improve deadline satisfaction, making on-premises ML deployments more reliable and efficient.

Original Abstract

Machine learning (ML) inference serving systems host deep neural network (DNN) models and schedule incoming inference requests across deployed GPUs. However, limited support for task prioritization and insufficient latency estimation under concurrent execution may restrict their applicability in on-premises scenarios. We present *Strait*, a serving system designed to enhance deadline satisfaction for dual-priority inference traffic under high GPU utilization. To improve latency estimation, Strait models potential contention during data transfer and accounts for kernel execution interference through an adaptive prediction model. By drawing on these predictions, it performs priority-aware scheduling to deliver differentiated handling. Evaluation results under intense workloads suggest that Strait reduces deadline violations for high-priority tasks by 1.02 to 11.18 percentage points while incurring acceptable costs on low-priority tasks. Compared to software-defined preemption approaches, Strait also exhibits more equitable performance.
