Predictive Autoscaling for Node.js on Kubernetes: Lower Latency, Right-Sized Capacity
Ivan Tymoshenko, Luca Maraschi, Matteo Collina
TLDR
This paper introduces a predictive autoscaling algorithm for Node.js on Kubernetes that proactively scales to reduce latency and optimize capacity.
Key contributions
- Introduces a predictive autoscaling algorithm for Node.js on Kubernetes to proactively manage capacity.
- Operates on a cluster-wide aggregate metric that is approximately invariant under scaling actions, yielding a stable signal suitable for short-term extrapolation.
- Employs a five-stage pipeline and metric model to transform raw data into a clean prediction signal.
- Achieves 26ms median latency under a steady ramp, significantly outperforming KEDA (154ms) and HPA (522ms) in benchmarks.
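The aggregate-metric idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the field name `eventLoopUtilization` and both function names are assumed here. The point is that per-instance load changes whenever pods are added (traffic redistributes), but the cluster-wide sum stays roughly constant for the same external traffic, so it makes a stable signal to scale against.

```javascript
// Cluster-wide aggregate load: sum of per-instance event-loop utilization
// (0..1 each). Adding or removing instances redistributes load across pods
// but leaves this sum approximately unchanged for fixed external traffic.
function aggregateLoad(instances) {
  return instances.reduce((sum, i) => sum + i.eventLoopUtilization, 0);
}

// Replica count that keeps per-instance load near a target threshold,
// given a (possibly forecast) aggregate.
function desiredReplicas(predictedAggregate, targetPerInstance) {
  return Math.max(1, Math.ceil(predictedAggregate / targetPerInstance));
}
```

For example, an aggregate of 0.75 with a per-instance target of 0.25 calls for 3 replicas, regardless of how many pods currently share the load.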
Why it matters
Reactive autoscalers on Kubernetes often miss latency SLOs for Node.js because they detect overload only after it has begun. This paper offers a predictive approach that forecasts load and scales before capacity runs out, sharply reducing latency without permanent over-provisioning. This is crucial for high-performance, cost-efficient cloud applications.
Original Abstract
Kubernetes offers two default paths for scaling Node.js workloads, and both have structural limitations. The Horizontal Pod Autoscaler scales on CPU utilization, which does not directly measure event loop saturation: a Node.js pod can queue requests and miss latency SLOs while CPU reports moderate usage. KEDA extends HPA with richer triggers, including event-loop metrics, but inherits the same reactive control loop, detecting overload only after it has begun. By the time new pods start and absorb traffic, the system may already be degraded. Lowering thresholds shifts the operating point but does not change the dynamic: the scaler still reacts to a value it has already crossed, at the cost of permanent over-provisioning. We propose a predictive scaling algorithm that forecasts where load will be by the time new capacity is ready and scales proactively based on that forecast. Per-instance metrics are corrupted by the scaler's own actions: adding an instance redistributes load and changes every metric, even if external traffic is unchanged. We observe that operating on a cluster-wide aggregate that is approximately invariant under scaling eliminates this feedback loop, producing a stable signal suitable for short-term extrapolation. We define a metric model (a set of three functions that encode how a specific metric relates to scaling) and a five-stage pipeline that transforms raw, irregularly-timed, partial metric data into a clean prediction signal. In benchmarks against HPA and KEDA under steady ramp and sudden spike, the algorithm keeps per-instance load near the target threshold throughout. Under the steady ramp, median latency is 26ms, compared to 154ms for KEDA and 522ms for HPA.
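The forecasting step the abstract describes ("forecasts where load will be by the time new capacity is ready") can be illustrated with a minimal linear extrapolation over the aggregate signal. This is a simplified sketch: the paper's metric model and five-stage pipeline are richer, and the function and field names here are assumed, not taken from the paper.

```javascript
// Extrapolate the cluster-wide aggregate load to the moment new pods would
// become ready. samples: [{ t: epochMs, value: aggregateLoad }, ...],
// oldest first; horizonMs: expected pod startup time.
function forecastAggregate(samples, horizonMs) {
  const first = samples[0];
  const last = samples[samples.length - 1];
  // Slope of the aggregate over the sampled window, in load units per ms.
  const slope = (last.value - first.value) / (last.t - first.t);
  // Project the current value forward by the startup horizon.
  return last.value + slope * horizonMs;
}
```

Scaling on this projected value, rather than the current one, is what lets the scaler add capacity before the threshold is actually crossed.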