ArXiv TLDR

Sharp regret-Hellinger bounds for Gaussian empirical Bayes via polynomial approximation

2605.02070

Jiafeng Chen, Yihong Wu

math.ST cs.IT econ.EM

TLDR

This paper introduces a novel polynomial approximation technique for Gaussian empirical Bayes that bounds the unregularized regret directly, yielding sharper regret-Hellinger bounds.

Key contributions

  • Introduces a polynomial approximation technique, paired with Bernstein-type inequalities for weighted L_2 norms, that bounds the unregularized regret directly.
  • Achieves sharper, sometimes optimal, regret-Hellinger bounds for unregularized Bayes rules.
  • Proves a sharp O(ε^2 log(1/ε)/log log(1/ε)) regret bound for compactly supported priors, where ε is the Hellinger distance between marginals; the method extends to priors with exponential tails.
  • Shows that regularization is genuinely necessary for heavy-tailed priors under only bounded moment assumptions.
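In the notation of the abstract, the compactly supported case can be written as follows (a paraphrase; the constant C and the marginal-density symbols f_G, f_Ĝ are notation introduced here, not the paper's):

```latex
\[
  \mathrm{Regret} \;\le\; C\,\varepsilon^{2}\,
  \frac{\log(1/\varepsilon)}{\log\log(1/\varepsilon)},
  \qquad
  \varepsilon \;=\; H\!\bigl(f_{\widehat{G}},\, f_{G}\bigr),
\]
```

where H denotes the Hellinger distance between the estimated and true marginal densities.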

Why it matters

This work simplifies the analysis of regret bounds in empirical Bayes, replacing the delicate recursive argument of prior work with a more direct and powerful method. It provides sharper theoretical guarantees, clarifies when regularization is truly essential, and yields improved regret bounds for the nonparametric maximum likelihood estimator.

Original Abstract

A central problem in the theory of empirical Bayes is to control the regret (excess risk) of a learned Bayes rule by the Hellinger distance between the estimated and true marginal densities. In the normal means model, the classical result of Jiang and Zhang (2009, Annals of Statistics) achieves this only after regularizing the Bayes rule and incurs an extraneous cubic logarithmic factor through a delicate recursive argument. This paper introduces a new technique, based on polynomial approximation and Bernstein-type inequalities for weighted $L_2$ norms, that bounds the unregularized regret directly. The method is conceptually simpler and yields sharper, sometimes optimal, regret bounds. For compactly supported priors, we prove the sharp bound that the regret is at most $O(ε^2 \log(1/ε)/\log\log(1/ε))$, where $ε$ is the Hellinger distance between the marginal densities. The same method also extends to priors with exponential tails. Conversely, we show that regularization is genuinely necessary for heavy-tailed priors under only bounded moment assumptions. As a statistical consequence, we obtain improved regret bounds for the nonparametric maximum likelihood estimator.
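For intuition on the setup the abstract describes, here is a minimal numerical sketch, not taken from the paper: in the normal means model the Bayes rule can be written via Tweedie's formula, theta_hat(x) = x + (log f)'(x), and an empirical Bayes plug-in applies the same formula to an estimated marginal. The two-point prior, the kernel density estimate standing in for a marginal estimate (the paper analyzes the NPMLE), and the finite-difference score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal means model X_i = theta_i + N(0, 1), with a compactly supported
# (two-point) prior G on {-2, +2}. All choices here are illustrative.
n = 2000
theta = rng.choice([-2.0, 2.0], size=n)
x = theta + rng.standard_normal(n)

SQRT2PI = np.sqrt(2 * np.pi)

def true_marginal(t):
    # True marginal f_G = G * N(0, 1): equal mixture of N(-2, 1) and N(2, 1).
    return 0.5 * (np.exp(-(t + 2) ** 2 / 2) + np.exp(-(t - 2) ** 2 / 2)) / SQRT2PI

def kde_marginal(t, data, h):
    # Gaussian kernel density estimate of the marginal (a stand-in for any
    # marginal density estimate; the paper's results concern the NPMLE).
    z = (t[:, None] - data[None, :]) / h
    return np.exp(-z ** 2 / 2).mean(axis=1) / (h * SQRT2PI)

def tweedie(t, density, eps=1e-4):
    # Unregularized Bayes rule via Tweedie's formula:
    # theta_hat(t) = t + (log f)'(t), score by central finite differences.
    return t + (np.log(density(t + eps)) - np.log(density(t - eps))) / (2 * eps)

h = 1.06 * x.std() * n ** (-0.2)  # Silverman-style bandwidth (assumption)
oracle = tweedie(x, true_marginal)                     # oracle Bayes rule
plugin = tweedie(x, lambda t: kde_marginal(t, x, h))   # empirical Bayes plug-in

oracle_risk = np.mean((oracle - theta) ** 2)
plugin_risk = np.mean((plugin - theta) ** 2)
regret = plugin_risk - oracle_risk  # excess risk of the plug-in over the oracle
print(f"oracle risk {oracle_risk:.3f}, plug-in risk {plugin_risk:.3f}, regret {regret:.4f}")
```

The regret printed here is the quantity the paper controls: how much risk the plug-in rule loses by using an estimated marginal, bounded in terms of the Hellinger distance between the estimated and true marginals.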
