ArXiv TLDR

Tempered Guided Diffusion

2605.03712

Andreas Makris, Paul Fearnhead, Chris Nemeth

stat.ML · cs.LG

TLDR

Tempered Guided Diffusion (TGD) improves training-free conditional diffusion by efficiently sampling plausible trajectories for better posterior approximation.

Key contributions

  • Introduces Tempered Guided Diffusion (TGD) for efficient training-free conditional diffusion sampling.
  • Uses sequential Monte Carlo to reweight and resample particles, focusing on plausible trajectories.
  • Provides theoretical consistency for posterior approximation with increasing particle count.
  • Accelerated TGD (A-TGD) prunes trajectories for faster sampling in expensive reconstruction tasks.

Why it matters

Existing training-free conditional diffusion methods allocate computation inefficiently: independent guided trajectories vary widely in quality, and extra effort on a single trajectory cannot recover from poor early decisions. TGD addresses this by concentrating computation on promising trajectories, yielding more accurate posterior approximations and better performance on complex inverse problems, and making training-free conditional diffusion more practical.

Original Abstract

Training-free conditional diffusion provides a flexible alternative to task-specific conditional model training, but existing samplers often allocate computation inefficiently: independent guided trajectories can vary widely in quality, and additional function evaluations along a single trajectory may not recover from poor early decisions. We propose Tempered Guided Diffusion (TGD), an annealed sequential Monte Carlo framework for training-free conditional sampling with diffusion priors. TGD targets tempered posterior distributions over the clean signal, using noisy diffusion states only as auxiliary variables for proposing reconstructions and propagating particles. Particles are reweighted by incremental likelihood ratios, resampled, and propagated across noise levels, concentrating computation on trajectories plausible under both the prior and observation. Under idealized exact-reconstruction assumptions, full TGD yields a consistent particle approximation to the posterior as the number of particles grows. For expensive reconstruction tasks, Accelerated TGD (A-TGD) retains early particle exploration but prunes to a single high-likelihood trajectory partway through sampling. Experiments on a controlled two-dimensional inverse problem and image inverse problems show improved posterior approximation and favorable wall-clock speed-quality tradeoffs over independent multi-trajectory baselines.
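The reweight–resample–propagate loop the abstract describes follows the standard annealed sequential Monte Carlo pattern. Below is a minimal, hedged sketch of one tempering pass on a toy 1-D problem; all names (`tempered_smc_step`, the Gaussian prior and likelihood, the temperature schedule) are illustrative assumptions, not the paper's implementation, and the diffusion-prior propagation step that TGD interleaves is deliberately omitted.

```python
import numpy as np

def systematic_resample(weights, rng):
    # Systematic resampling: low-variance selection of particle indices
    # proportional to the normalized weights.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def tempered_smc_step(particles, log_lik, temp_prev, temp_next, rng):
    # One annealing step: reweight by the incremental (tempered)
    # likelihood ratio, then resample so computation concentrates on
    # particles plausible under both prior and observation.
    incr = (temp_next - temp_prev) * log_lik(particles)
    w = np.exp(incr - incr.max())  # stabilize before normalizing
    w /= w.sum()
    return particles[systematic_resample(w, rng)]

# Toy demo (illustrative): broad Gaussian prior N(0, 3^2), Gaussian
# likelihood centered at an observation y = 0 with unit variance.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 3.0, size=2000)
log_lik = lambda x: -0.5 * x**2
temps = np.linspace(0.0, 1.0, 11)
for t_prev, t_next in zip(temps[:-1], temps[1:]):
    particles = tempered_smc_step(particles, log_lik, t_prev, t_next, rng)
    # A real TGD step would also propagate particles through the
    # diffusion prior here; this sketch only anneals the likelihood.
```

After the final temperature, the particle standard deviation should be near the analytic posterior value (about 0.95 for this prior/likelihood pair), up to Monte Carlo error; in full TGD the consistency result says this approximation improves as the particle count grows.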
