ArXiv TLDR

On two ways to use determinantal point processes for Monte Carlo integration

arXiv:2604.19698

Guillaume Gautier, Rémi Bardenet, Michal Valko

cs.LG, math.ST

TLDR

This paper revisits and generalizes two determinantal point process (DPP) estimators for Monte Carlo integration, extending both to continuous settings and providing sampling algorithms.

Key contributions

  • Generalizes Bardenet & Hardy's DPP estimator, which attains variance of order O(N^(-(1+1/d))) for smooth integrands while using a fixed, integrand-independent DPP.
  • Generalizes Ermakov & Zolotukhin's unbiased DPP estimator, which tailors the DPP to the integrand.
  • Extends both DPP estimators to continuous integration settings.
  • Provides practical sampling algorithms for the newly generalized DPP-based Monte Carlo estimators; see the sketch after this list.
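
For intuition, here is a minimal NumPy sketch of the classic chain-rule sampler for a projection DPP on a finite ground set. The paper's algorithms target the harder continuous case; the function name and setup below are illustrative, not the paper's.

```python
import numpy as np

def sample_projection_dpp(Phi, rng=None):
    """Sample the projection DPP with kernel K = Phi @ Phi.T,
    where Phi is (N, k) with orthonormal columns (N items, rank k).
    Sequential chain-rule sampler of Hough et al. (2006)."""
    rng = np.random.default_rng() if rng is None else rng
    N, k = Phi.shape
    V = Phi.copy()                      # residual feature vectors
    sample = []
    for _ in range(k):
        # pick item i with probability proportional to ||V[i]||^2
        probs = np.sum(V ** 2, axis=1)
        i = rng.choice(N, p=probs / probs.sum())
        sample.append(i)
        # project all feature vectors orthogonally to V[i]
        v = V[i] / np.linalg.norm(V[i])
        V -= np.outer(V @ v, v)
    return sample

# Usage: a rank-5 projection kernel over 100 items.
rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.standard_normal((100, 5)))
indices = sample_projection_dpp(Phi, rng)  # 5 distinct, repulsive items
```

A rank-k projection DPP returns exactly k points almost surely, which is what makes it a natural drop-in replacement for a size-N Monte Carlo sample.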

Why it matters

This work advances Monte Carlo integration by generalizing two powerful DPP-based estimators to continuous settings and providing algorithms to sample them. Faster variance decay and unbiasedness are both valuable for numerical integration, which underpins many scientific and engineering applications, and concrete sampling algorithms make these techniques practical to deploy.

Original Abstract

The standard Monte Carlo estimator $\widehat{I}_N^{\mathrm{MC}}$ of $\int f \,\mathrm{d}\omega$ relies on independent samples from $\omega$ and has variance of order $1/N$. Replacing the samples with a determinantal point process (DPP), a repulsive distribution, makes the estimator consistent, with variance rates that depend on how the DPP is adapted to $f$ and $\omega$. We examine two existing DPP-based estimators: one by Bardenet & Hardy (2020) with a rate of $\mathcal{O}(N^{-(1+1/d)})$ for smooth $f$, but relying on a fixed DPP. The other, by Ermakov & Zolotukhin (1960), is unbiased with rate of order $1/N$, like Monte Carlo, but its DPP is tailored to $f$. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
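
To make the Ermakov & Zolotukhin construction concrete: for a projection DPP built from functions $\phi_1, \dots, \phi_N$ that are orthonormal with respect to $\omega$, the unbiased coefficient estimates reduce to Cramer's rule, i.e., to solving one linear system per sample. A minimal sketch, assuming an exact DPP sample is already in hand (names are ours, not the paper's):

```python
import numpy as np

def ez_estimates(f, phis, points):
    """Ermakov-Zolotukhin estimator (sketch).
    phis:   N functions, orthonormal w.r.t. the reference measure omega.
    points: one exact sample x_1..x_N from the projection DPP with
            kernel K(x, y) = sum_k phi_k(x) * phi_k(y).
    Returns unbiased estimates of the N coefficients int f phi_k d(omega),
    via Cramer's rule, i.e. the solution y of Phi @ y = f(points)."""
    Phi = np.array([[phi(x) for phi in phis] for x in points])  # (N, N)
    fx = np.array([f(x) for x in points])
    return np.linalg.solve(Phi, fx)
```

When $\omega$ is a probability measure and $\phi_1 \equiv 1$, the first coordinate of the output is an unbiased estimate of $\int f \,\mathrm{d}\omega$ itself.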
