ArXiv TLDR

Doubly Outlier-Robust Online Infinite Hidden Markov Model

2604.14322

Horace Yiu, Leandro Sánchez-Betancourt, Álvaro Cartea, Gerardo Duran-Martin

stat.ML, cs.LG

TLDR

This paper introduces the Batched Robust iHMM (BR-iHMM), an online infinite hidden Markov model that remains reliable when streaming data contain outliers and the model is misspecified.

Key contributions

  • Introduces Batched Robust iHMM (BR-iHMM) for online learning with outliers and model misspecification.
  • Defines robustness using the posterior influence function (PIF) and provides theoretical guarantees for bounded PIF.
  • Balances adaptivity and robustness with two tunable parameters to manage adaptation lag for regime switching.
  • Achieves up to 67% reduction in one-step-ahead forecasting error across limit order book data, hourly electricity demand, and a synthetic high-dimensional linear system, relative to competing online Bayesian methods.
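To give intuition for the kind of robust online update the contributions describe, here is a minimal sketch of outlier-robust forward filtering for a *finite* Gaussian HMM. It is not the BR-iHMM update from the paper: the clipping threshold `c` and tempering weight `beta` are hypothetical stand-ins for the paper's two tunable parameters, illustrating only the general generalised-Bayes idea that bounding each observation's log-likelihood contribution bounds its influence on the posterior.

```python
import numpy as np

def robust_forward_step(prior, A, means, var, y, c=3.0, beta=0.5):
    """One outlier-robust forward-filtering step for a K-state Gaussian HMM.

    Illustrative only (not the BR-iHMM rule): each state's log-likelihood
    is clipped at c standardised-residual units and tempered by beta in
    (0, 1], so a single extreme observation shifts the state posterior by
    at most a bounded amount -- loosely analogous to a bounded posterior
    influence function (PIF).
    """
    pred = prior @ A                        # predict: p(z_t | y_{1:t-1})
    z2 = (y - means) ** 2 / var             # squared standardised residuals
    loglik = -0.5 * (np.minimum(z2, c**2) + np.log(2 * np.pi * var))
    logw = beta * loglik                    # tempering trades robustness vs. adaptivity
    post = pred * np.exp(logw - logw.max())
    return post / post.sum()

# Toy usage: 2 states; a clean observation vs. a gross outlier.
A = np.array([[0.95, 0.05], [0.05, 0.95]])
means, var = np.array([0.0, 5.0]), 1.0
prior = np.array([0.5, 0.5])
p_clean = robust_forward_step(prior, A, means, var, y=0.1)     # strongly favours state 0
p_outlier = robust_forward_step(prior, A, means, var, y=100.0) # both likelihoods clipped
```

Because the outlier exceeds the clipping threshold under every state, its (clipped) likelihood is identical across states and the update essentially falls back to the predictive distribution, rather than being dragged toward whichever state the outlier superficially resembles. This fallback behaviour is also what creates the adaptation lag the paper discusses: a genuine regime switch initially looks like an outlier.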

Why it matters

This paper offers a practical approach to robust online learning in real-world settings where data streams contain outliers and models are imperfect. BR-iHMM improves one-step-ahead forecasting accuracy substantially while coming with theoretical guarantees of a bounded posterior influence function, making it valuable for applications that need reliable, interpretable online analysis.

Original Abstract

We derive a robust update rule for the online infinite hidden Markov model (iHMM) for when the streaming data contains outliers and the model is misspecified. Leveraging recent advances in generalised Bayesian inference, we define robustness via the posterior influence function (PIF), and provide conditions under which the online iHMM has bounded PIF. Imposing robustness inevitably induces an adaptation lag for regime switching. Our method, which is called Batched Robust iHMM (BR-iHMM), balances adaptivity and robustness with two additional tunable parameters. Across limit order book data, hourly electricity demand, and a synthetic high-dimensional linear system, BR-iHMM reduces one-step-ahead forecasting error by up to 67% relative to competing online Bayesian methods. Together with theoretical guarantees of bounded PIF, our results highlight the practicality of our approach for both forecasting and interpretable online learning.
