ArXiv TLDR

Online Quantile Regression for Nonparametric Additive Models

arXiv: 2604.08969

Haoran Zhan

stat.ML · cs.LG · math.ST

TLDR

P-FGD is a new online algorithm for nonparametric additive quantile regression that combines efficient per-step computation with a minimax optimal consistency rate.

Key contributions

  • Introduces P-FGD, a projected functional gradient descent algorithm for online nonparametric additive quantile regression.
  • Extends functional stochastic gradient descent to the pinball loss for robust quantile estimation.
  • Offers efficient online learning with O(J_t ln J_t) complexity and O(J_t) prediction time, without storing historical data.
  • Achieves minimax optimal consistency rate O(t^(-2s/(2s+1))) using a novel Hilbert space projection identity.
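The core recipe above can be sketched as follows. This is a minimal toy version, not the paper's exact P-FGD: the additive quantile function is represented by a truncated cosine-basis expansion per coordinate (the class name, basis choice, and the L2-ball projection standing in for the paper's Hilbert-space projection are all assumptions for illustration), and coefficients are updated by stochastic gradient steps on the pinball loss without storing any past data.

```python
import numpy as np

def pinball_grad(residual, tau):
    # Subgradient of the pinball loss rho_tau(r) = r * (tau - 1{r < 0})
    # with respect to the prediction: -tau if residual > 0, else 1 - tau.
    return np.where(residual > 0, -tau, 1.0 - tau)

class OnlineAdditiveQuantile:
    """Toy online additive quantile estimator (hypothetical sketch).

    Each coordinate's additive component is a truncated cosine-basis
    expansion; coefficients are updated by functional SGD on the pinball
    loss, then projected onto an L2 ball -- a simple stand-in for the
    Hilbert-space projection used in the paper."""

    def __init__(self, d, n_basis=8, tau=0.5, radius=10.0):
        self.tau, self.radius = tau, radius
        self.coef = np.zeros((d, n_basis))  # one expansion per coordinate
        self.k = np.arange(n_basis)

    def _features(self, x):
        # x in [0, 1]^d -> cosine features, shape (d, n_basis)
        return np.cos(np.pi * self.k[None, :] * x[:, None])

    def predict(self, x):
        # O(d * n_basis) prediction: sum of all basis contributions
        return float(np.sum(self.coef * self._features(x)))

    def update(self, x, y, lr):
        phi = self._features(x)
        g = pinball_grad(y - self.predict(x), self.tau)  # scalar subgradient
        self.coef -= lr * g * phi                        # functional SGD step
        norm = np.linalg.norm(self.coef)
        if norm > self.radius:                           # projection step
            self.coef *= self.radius / norm
```

For example, with tau=0.5 and repeated observations at a fixed input, the prediction drifts toward the empirical median of the targets; no historical observations are retained between steps, only the coefficient array.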

Why it matters

This paper marks a significant advance in online quantile regression, offering an efficient and theoretically optimal algorithm. Because it handles nonparametric additive models without storing historical data, it is practical for large-scale, real-time applications and improves on RKHS-based methods in online learning scenarios.

Original Abstract

This paper introduces a projected functional gradient descent algorithm (P-FGD) for training nonparametric additive quantile regression models in online settings. This algorithm extends the functional stochastic gradient descent framework to the pinball loss. An advantage of P-FGD is that it does not need to store historical data while maintaining $O(J_t\ln J_t)$ computational complexity per step where $J_t$ denotes the number of basis functions. Besides, we only need $O(J_t)$ computational time for quantile function prediction at time $t$. These properties show that P-FGD is much better than the commonly used RKHS in online learning. By leveraging a novel Hilbert space projection identity, we also prove that the proposed online quantile function estimator (P-FGD) achieves the minimax optimal consistency rate $O(t^{-\frac{2s}{2s+1}})$ where $t$ is the current time and $s$ denotes the smoothness degree of the quantile function. Extensions to mini-batch learning are also established.
