ArXiv TLDR

Amortized Variational Inference for Joint Posterior and Predictive Distributions in Bayesian Uncertainty Quantification

arXiv:2605.03710

Nan Feng, Xun Huan

stat.ML cs.AI cs.LG stat.CO stat.ME

TLDR

An amortized variational Bayesian method jointly learns the posterior and posterior-predictive distributions, yielding more accurate predictive uncertainty at a fraction of the online cost of two-stage inference.

Key contributions

  • Introduces a variational Bayesian framework that learns the posterior and posterior-predictive distributions jointly (see the formula after this list).
  • Employs amortized training to enable efficient online predictive inference.
  • Achieves more accurate predictive distributions than traditional two-stage methods.
  • Significantly reduces computational cost for online uncertainty quantification.
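
For orientation, the quantity the framework targets is the posterior-predictive distribution, written below in conventional Bayesian notation (the standard form, not lifted from the paper): a predictive model averaged over the posterior.

```latex
% Posterior-predictive distribution: uncertainty in the parameters \theta,
% inferred from data d, is propagated to the quantity of interest y.
p(y \mid d) = \int p(y \mid \theta)\, p(\theta \mid d)\, \mathrm{d}\theta
```

The conventional two-stage route first approximates p(θ | d) and then estimates this integral by Monte Carlo; the proposed method instead learns a variational approximation to p(y | d) directly, alongside the posterior.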

Why it matters

Bayesian uncertainty quantification is essential in many scientific and engineering settings but often computationally prohibitive, especially when the forward model is an expensive simulator. By learning the posterior and predictive distributions jointly, and by shifting the heavy computation to an offline, amortized training stage, this approach makes accurate predictive uncertainty quantification practical for complex models such as PDE-governed simulations.

Original Abstract

Bayesian predictive inference propagates parameter uncertainty to quantities of interest through the posterior-predictive distribution. In practice, this is typically performed using a two-stage procedure: first approximating the posterior distribution of model parameters, and then propagating posterior samples through the predictive model via Monte Carlo simulation. This sequential workflow can be computationally demanding, particularly for high-fidelity models such as those governed by partial differential equations. We propose a variational Bayesian framework that directly targets the posterior-predictive distribution and jointly learns variational approximations of both the posterior and the corresponding predictive distribution. The formulation introduces a variational upper bound on the Kullback--Leibler divergence together with moment-based regularization terms. The variational distributions are trained in an amortized manner, shifting computational effort to an offline stage and enabling efficient online inference. Numerical experiments ranging from analytical benchmarks to a finite-element solid mechanics problem demonstrate that the proposed method achieves more accurate predictive distributions than conventional two-stage variational inference, while substantially reducing the cost of online predictive inference.
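
The abstract does not spell out the training objective, so the sketch below is only a plausible stand-in for the idea, not the paper's method: a shared network amortizes a Gaussian posterior head q(θ | d) and a Gaussian predictive head r(y | d) over simulated data, trains both jointly with a density-fitting loss plus crude moment penalties (standing in for the paper's KL upper bound and moment-based regularization), and defers all heavy computation to the offline loop. All names here (`AmortizedHeads`, `joint_loss`, the toy forward model `g`) are hypothetical.

```python
import torch
import torch.nn as nn

# Stand-in forward model y = g(theta); the paper's examples use expensive
# PDE / finite-element solvers here, replaced in this sketch by a cheap map.
def g(theta):
    return torch.sin(theta)

prior = torch.distributions.Normal(0.0, 1.0)

class AmortizedHeads(nn.Module):
    """Maps an observation d to Gaussian parameters of q(theta|d) and r(y|d)."""
    def __init__(self, d_dim=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_dim, hidden), nn.Tanh())
        self.post_head = nn.Linear(hidden, 2)  # mean, log-std of posterior q(theta|d)
        self.pred_head = nn.Linear(hidden, 2)  # mean, log-std of predictive r(y|d)

    def forward(self, d):
        h = self.trunk(d)
        return self.post_head(h), self.pred_head(h)

def joint_loss(model, d, theta_true, n_mc=32):
    post, pred = model(d)
    q = torch.distributions.Normal(post[:, 0], post[:, 1].exp())
    r = torch.distributions.Normal(pred[:, 0], pred[:, 1].exp())
    # Posterior term: amortized density fitting on simulated (theta, d) pairs.
    post_term = -q.log_prob(theta_true).mean()
    # Predictive term: push reparameterized posterior draws through g and fit
    # r(y|d) to them; the moment penalties are a crude stand-in for the
    # paper's moment-based regularization.
    y = g(q.rsample((n_mc,)))
    pred_term = -r.log_prob(y).mean()
    moments = ((r.mean - y.mean(0)).pow(2).mean()
               + (r.stddev - y.std(0)).pow(2).mean())
    return post_term + pred_term + moments

model = AmortizedHeads()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):  # offline stage: all heavy lifting happens here
    theta_true = prior.sample((128,))
    d = theta_true[:, None] + 0.1 * torch.randn(128, 4)  # synthetic observations
    opt.zero_grad()
    joint_loss(model, d, theta_true).backward()
    opt.step()

# Online stage: one forward pass yields both posterior and predictive
# parameters for a new observation, with no Monte Carlo propagation.
post_params, pred_params = model(d[:1])
```

The appeal mirrors the abstract's claim: once training is done offline, online predictive inference costs a single network evaluation rather than a posterior fit followed by Monte Carlo simulation.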
