ArXiv TLDR

Scepsy: Serving Agentic Workflows Using Aggregate LLM Pipelines

arXiv:2604.15186

Marcel Wagenländer, Otto White, Britannio Jarrett, Pedro Silvestre, Yanda Tao + 4 more

cs.DC, cs.AI

TLDR

Scepsy efficiently serves agentic LLM workflows on GPU clusters by exploiting the stability of each LLM's aggregate execution share to predict allocation performance, boosting throughput and reducing latency.

Key contributions

  • Addresses challenges of serving multi-LLM agentic workflows with unpredictable execution and GPU oversubscription.
  • Introduces Scepsy, a system whose "Aggregate LLM Pipelines" predict an allocation's latency and throughput from each LLM's stable share of total execution time (see the sketch after this list).
  • Searches a space of GPU allocations (fractional GPU shares, tensor parallelism degrees, replica counts) and places the best one on the cluster with a hierarchical heuristic that minimizes fragmentation.
  • Achieves up to 2.4x higher throughput and 27x lower latency compared to prior methods.
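In pseudocode terms, the Aggregate LLM Pipeline amounts to a bottleneck model over the workflow's LLMs. Below is a minimal Python sketch of that idea under a simple cost model; the names (`LLMProfile`, `Allocation`, `predict`) and the formulas are illustrative assumptions, not Scepsy's actual implementation.

```python
# Minimal sketch of an "Aggregate LLM Pipeline" predictor, based on the
# paper's description. All names and the exact cost model are assumptions.
from dataclasses import dataclass

@dataclass
class LLMProfile:
    share: float                  # stable fraction of the workflow's total work
    throughput: dict[int, float]  # profiled tokens/s at each tensor-parallel degree

@dataclass
class Allocation:
    tp_degree: int    # tensor parallelism degree for this LLM
    replicas: int     # number of replicas of this LLM
    gpu_share: float  # fractional GPU share per replica (0 < share <= 1)

def predict(profiles: list[LLMProfile], allocs: list[Allocation],
            work_per_request: float) -> tuple[float, float]:
    """Estimate (requests/s, seconds/request) for a candidate allocation.

    Each LLM is a pipeline stage whose load is its aggregate share of the
    per-request work: the slowest stage bounds throughput, and per-stage
    service times sum to a latency estimate.
    """
    rates = [p.throughput[a.tp_degree] * a.replicas * a.gpu_share
             for p, a in zip(profiles, allocs)]
    # Sustainable request rate: the bottleneck stage must keep up with its share.
    throughput = min(r / (p.share * work_per_request)
                     for r, p in zip(rates, profiles))
    # Latency estimate: one request's service time summed across all stages.
    latency = sum(p.share * work_per_request / r
                  for r, p in zip(rates, profiles))
    return throughput, latency
```

The min-over-stages throughput encodes the paper's core insight: because each LLM's share of total execution time is stable across runs, the bottleneck LLM determines the sustainable request rate without having to simulate individual, data-dependent executions.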

Why it matters

Agentic workflows are crucial for complex AI tasks but are challenging to serve efficiently due to their dynamic nature and GPU demands. Scepsy offers a novel solution by optimizing GPU allocation based on aggregate LLM behavior, significantly boosting performance and enabling scalable deployment.

Original Abstract

Agentic workflows carry out complex tasks by orchestrating multiple large language models (LLMs) and tools. Serving such workflows at a target throughput with low latency is challenging because they can be defined using arbitrary agentic frameworks and exhibit unpredictable execution times: execution may branch, fan-out, or recur in data-dependent ways. Since LLMs in workflows often outnumber available GPUs, their execution also leads to GPU oversubscription. We describe Scepsy, a new agentic serving system that efficiently schedules arbitrary multi-LLM agentic workflows onto a GPU cluster. Scepsy exploits the insight that, while agentic workflows have unpredictable end-to-end latencies, the shares of each LLM's total execution times are comparatively stable across executions. Scepsy decides on GPU allocations based on these aggregate shares: first, it profiles the LLMs under different parallelism degrees. It then uses these statistics to construct an Aggregate LLM Pipeline, which is a lightweight latency/throughput predictor for allocations. To find a GPU allocation that minimizes latency while achieving a target throughput, Scepsy uses the Aggregate LLM Pipeline to explore a search space over fractional GPU shares, tensor parallelism degrees, and replica counts. It uses a hierarchical heuristic to place the best allocation onto the GPU cluster, minimizing fragmentation, while respecting network topology constraints. Our evaluation on realistic agentic workflows shows that Scepsy achieves up to 2.4x higher throughput and 27x lower latency compared to systems that optimize LLMs independently or rely on user-specified allocations.
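To make the search step concrete, here is a hypothetical sketch of how the allocation search might use such a predictor, reusing the `LLMProfile`/`Allocation`/`predict` sketch above. The candidate grid, brute-force enumeration, and GPU-count feasibility check are illustrative assumptions; the actual system prunes this space and adds the hierarchical placement heuristic, which is omitted here.

```python
# Hypothetical brute-force version of the allocation search: enumerate
# fractional GPU shares, TP degrees, and replica counts per LLM, score each
# feasible candidate with the predictor, and keep the lowest-latency
# allocation that still meets the target throughput.
from itertools import product

def search(profiles, total_gpus, target_throughput, work_per_request,
           tp_degrees=(1, 2, 4), replica_counts=(1, 2), shares=(0.5, 1.0)):
    per_llm = list(product(tp_degrees, replica_counts, shares))
    best = None  # (latency, allocations)
    for combo in product(per_llm, repeat=len(profiles)):  # one choice per LLM
        allocs = [Allocation(tp, reps, share) for tp, reps, share in combo]
        gpus_needed = sum(a.tp_degree * a.replicas * a.gpu_share for a in allocs)
        if gpus_needed > total_gpus:
            continue  # infeasible: would oversubscribe the cluster
        tput, lat = predict(profiles, allocs, work_per_request)
        if tput >= target_throughput and (best is None or lat < best[0]):
            best = (lat, allocs)
    return best
```

Because the predictor is a handful of arithmetic operations, even a large candidate grid is cheap to score, which is what makes exploring fractional shares, tensor parallelism degrees, and replica counts tractable.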
