ArXiv TLDR

Distributional Off-Policy Evaluation with Deep Quantile Process Regression

arXiv:2604.18143

Qi Kuang, Chao Wang, Yuling Jiao, Fan Zhou

stat.ML · cs.LG · stat.ME

TLDR

This paper introduces DQPOPE, a deep quantile process regression method for distributional off-policy evaluation that estimates the full return distribution rather than only its expectation.

Key contributions

  • Proposes DQPOPE, a novel deep quantile process regression algorithm for distributional off-policy evaluation.
  • Extends deep quantile process regression from estimating discrete quantiles to estimating a continuous quantile function (see the sketch after this list).
  • Provides rigorous sample complexity analysis for distributional OPE using deep neural networks.
  • Empirically demonstrates that DQPOPE yields more precise and robust policy value estimates than standard methods.
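
To make the continuous-quantile idea concrete, here is a minimal sketch of quantile regression with the pinball (check) loss, the standard objective for quantile estimation. The QuantileNet architecture, hyperparameters, and synthetic data are illustrative assumptions, not the paper's implementation; the key point is that the quantile level tau is fed to the network as a continuous input, so a single model represents the entire quantile function.

    import torch
    import torch.nn as nn

    class QuantileNet(nn.Module):
        """Maps (state, quantile level tau) to the estimated tau-quantile of the return."""
        def __init__(self, state_dim, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(state_dim + 1, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, state, tau):
            # tau enters as an ordinary input, so one network represents the
            # whole continuous quantile function tau -> Q(tau | state).
            return self.mlp(torch.cat([state, tau], dim=-1)).squeeze(-1)

    def pinball_loss(pred, target, tau):
        # Check loss: its minimizer over pred is the tau-quantile of target.
        diff = target - pred
        return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

    # Toy stand-in for logged data; real usage would fit returns observed
    # under the behavior policy (placeholder data, for illustration only).
    state_dim, n = 4, 512
    states = torch.randn(n, state_dim)
    returns = states.sum(dim=-1) + 0.5 * torch.randn(n)

    net = QuantileNet(state_dim)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        tau = torch.rand(n, 1)  # fresh continuous quantile levels each step
        loss = pinball_loss(net(states, tau), returns, tau.squeeze(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

Sampling fresh quantile levels every step is what distinguishes fitting a continuous quantile function from fitting a fixed grid of discrete quantiles.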

Why it matters

Most OPE methods estimate only the expected return. DQPOPE estimates the entire return distribution, which is vital for risk-aware decision-making in reinforcement learning, and it does so with the same sample size that conventional methods need to estimate a single policy value.
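
A full return distribution directly yields risk measures that a point estimate cannot. The sketch below shows generic grid-based estimators of value-at-risk, CVaR, and the mean from any estimated quantile function; the helper names and the integration scheme are assumptions for illustration, not the paper's procedure.

    import torch

    def var_alpha(quantile_fn, alpha=0.05):
        # Value-at-risk: just the alpha-quantile of the return distribution.
        return quantile_fn(torch.tensor([alpha]))

    def cvar_alpha(quantile_fn, alpha=0.05, grid=100):
        # CVaR_alpha = (1/alpha) * integral of Q(tau) over (0, alpha],
        # approximated by averaging Q on an evenly spaced grid of levels.
        taus = torch.linspace(alpha / grid, alpha, grid)
        return quantile_fn(taus).mean()

    def mean_return(quantile_fn, grid=1000):
        # The expected return is the integral of Q over (0, 1), so the scalar
        # value that classical OPE targets is recoverable from the same fit.
        taus = torch.linspace(0.5 / grid, 1.0 - 0.5 / grid, grid)
        return quantile_fn(taus).mean()

    # Usage with the net from the previous sketch, at a fixed state s:
    #   q = lambda taus: net(s.expand(len(taus), -1), taus.unsqueeze(-1))
    #   var_alpha(q), cvar_alpha(q), mean_return(q)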

Original Abstract

This paper investigates the off-policy evaluation (OPE) problem from a distributional perspective. Rather than focusing solely on the expectation of the total return, as in most existing OPE methods, we aim to estimate the entire return distribution. To this end, we introduce a quantile-based approach for OPE using deep quantile process regression, presenting a novel algorithm called Deep Quantile Process regression-based Off-Policy Evaluation (DQPOPE). We provide new theoretical insights into the deep quantile process regression technique, extending existing approaches that estimate discrete quantiles to estimate a continuous quantile function. A key contribution of our work is the rigorous sample complexity analysis for distributional OPE with deep neural networks, bridging theoretical analysis with practical algorithmic implementations. We show that DQPOPE achieves statistical advantages by estimating the full return distribution using the same sample size required to estimate a single policy value using conventional methods. Empirical studies further show that DQPOPE provides significantly more precise and robust policy value estimates than standard methods, thereby enhancing the practical applicability and effectiveness of distributional reinforcement learning approaches.
