ArXiv TLDR

Hedging Memory Horizons for Non-Stationary Prediction via Online Aggregation

arXiv:2605.06541

Yutong Wang, Yannig Goude, Qiwei Yao

cs.LG, stat.ML

TLDR

MELO is an online aggregation method that hedges memory horizons to adapt to non-stationary data, outperforming baselines in electricity load forecasting.

Key contributions

  • Introduces MELO, a model-agnostic online aggregation method for non-stationary prediction.
  • Hedges across multiple adaptation scales with exponentially weighted least-squares (EWLS) experts at several forgetting factors, combined via parameter-free MLpol aggregation.
  • Provides deterministic oracle inequalities, competing with the best raw and affine-combined predictors.
  • Reduces overall RMSE by 34.7% in electricity load forecasting through the COVID-19 lockdown, relative to base-only MLpol, without using external covariates.
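The EWLS adaptation experts amount to recursive least squares with a forgetting factor. Below is a minimal sketch, not the paper's implementation: the feature vector `x` stands in for the pool of base predictions, and the dimension, noise level, regularization, and forgetting-factor grid are all made-up choices for illustration.

```python
import numpy as np

def ewls_update(A, b, x, y, lam):
    """One recursive EWLS step: discount past statistics by the forgetting
    factor lam in (0, 1], then fold in the new observation.
    lam = 1 recovers ordinary least squares; smaller lam means shorter memory."""
    A = lam * A + np.outer(x, x)        # discounted Gram matrix
    b = lam * b + y * x                 # discounted cross-moments
    return A, b, np.linalg.solve(A, b)  # current affine-combination weights

# One expert per forgetting factor; each tracks its own memory horizon.
rng = np.random.default_rng(0)
d = 3
theta_true = np.array([1.0, -2.0, 0.5])
experts = {lam: (1e-3 * np.eye(d), np.zeros(d)) for lam in (0.9, 0.99, 1.0)}
thetas = {}
for _ in range(500):
    x = rng.normal(size=d)                     # stand-in for base forecasts
    y = theta_true @ x + 0.1 * rng.normal()    # observed outcome
    for lam, (A, b) in experts.items():
        A, b, theta = ewls_update(A, b, x, y, lam)
        experts[lam] = (A, b)
        thetas[lam] = theta
```

Because each step is a rank-one update plus a small linear solve, running a grid of such experts is cheap, which is what makes hedging over memory horizons practical without any retraining.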

Why it matters

This paper addresses online prediction in non-stationary environments, where the right adaptation memory is unknown in advance. MELO provides a robust solution that adapts effectively without costly retraining, demonstrating strong practical utility in energy forecasting.

Original Abstract

We study online prediction under distribution shift, where inputs arrive chronologically and outcomes are revealed only after prediction. In this setting, predictors must remain stable in quiet regimes yet adapt when regimes shift, and the right adaptation memory is unknown in advance. We propose MELO (Memory-hedged Exponentially Weighted Least-Squares Online aggregation), a model-agnostic method that hedges across adaptation scales: it wraps any non-anticipating base-predictor pool with exponentially weighted least-squares (EWLS) adaptation experts at multiple forgetting factors, and aggregates raw and EWLS-adapted forecasts with MLpol, a parameter-free online aggregation rule. Under boundedness conditions, we establish deterministic oracle inequalities showing that it competes with both the best raw predictor and the best bounded, time-varying affine combinations of the base predictions, up to a path-length-dependent tracking cost and a sublinear aggregation overhead. We evaluate MELO on French national electricity-load forecasting through the COVID-19 lockdown using no regime indicators, lockdown dates, or policy covariates. MELO reduces overall RMSE by 34.7% relative to base-only MLpol and achieves lower overall RMSE than a TabICL reference supplied with an external COVID policy-response covariate. Moreover, MELO requires only lightweight per-step recursive updates without model retraining.
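MLpol is commonly understood as the polynomially weighted average forecaster with multiple learning rates (ML-Poly). The sketch below assumes that standard form of the rule under square loss; the two-expert toy stream, the regime-shift time, and all constants are invented for illustration and are not from the paper.

```python
import numpy as np

def mlpol_weights(R, S):
    """Weights from cumulative regrets R and squared-regret sums S."""
    eta = 1.0 / (1.0 + S)          # per-expert adaptive learning rate
    w = eta * np.maximum(R, 0.0)   # polynomial potential: positive part of regret
    return w / w.sum() if w.sum() > 0 else np.full_like(R, 1.0 / len(R))

def mlpol_step(R, S, expert_preds, y):
    """Predict with current weights, then update regret statistics."""
    w = mlpol_weights(R, S)
    y_hat = w @ expert_preds
    # Instantaneous regret of each expert vs. the aggregate (square loss).
    r = (y_hat - y) ** 2 - (expert_preds - y) ** 2
    return y_hat, R + r, S + r ** 2

# Toy stream: expert 0 is good before a regime shift, expert 1 after it.
rng = np.random.default_rng(1)
R, S = np.zeros(2), np.zeros(2)
for t in range(2000):
    y = 1.0 if t < 300 else -1.0                      # shift at t = 300
    preds = np.array([1.0, -1.0]) + 0.05 * rng.normal(size=2)
    y_hat, R, S = mlpol_step(R, S, preds, y)
w_final = mlpol_weights(R, S)                          # mass moves to expert 1
```

The rule needs no tuned learning rate: each expert's rate shrinks with its observed regret variance, and weight flows to experts whose cumulative regret against the aggregate is positive, which is what lets the aggregation follow whichever memory horizon is currently best.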
