ArXiv TLDR

A Unified Measure-Theoretic View of Diffusion, Score-Based, and Flow Matching Generative Models

arXiv:2605.06829

Aditya Ranganath, Mukesh Singhal

cs.LG, cs.CV, cs.ET, cs.IT, cs.NE

TLDR

This paper unifies diffusion, score-based, and flow matching generative models under a measure-theoretic framework, clarifying their shared structure.

Key contributions

  • Derives reverse-time sampling for diffusion and score-based models as controlled stochastic dynamics (see the sampling sketch after this list).
  • Shows that the probability flow ODE yields the same marginals as the reverse SDE, linking diffusion to normalizing flows.
  • Interprets flow matching as direct regression of the velocity field, clarifying its relation to score-based training.
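The first two contributions are concrete enough to sketch in code. Below is a minimal NumPy illustration of reverse-time sampling for a variance-preserving (VP) diffusion, with the probability-flow ODE as a deterministic variant that induces the same marginals. The function name, the beta schedule in the example, and the closed-form Gaussian score are illustrative assumptions, not artifacts from the paper.

    import numpy as np

    def reverse_sample(score, x1, beta, n_steps=1000, ode=False, seed=0):
        """Integrate reverse-time VP-diffusion dynamics from t=1 down to t=0.

        score(x, t) approximates grad_x log rho_t(x); beta(t) is the noise
        schedule of the forward SDE dx = -1/2 beta(t) x dt + sqrt(beta(t)) dW.
        With ode=True this follows the probability-flow ODE instead, which is
        deterministic yet induces the same marginals rho_t as the reverse SDE.
        """
        rng = np.random.default_rng(seed)
        x, dt = x1.copy(), 1.0 / n_steps
        for i in range(n_steps, 0, -1):
            t = i * dt
            b, s = beta(t), score(x, t)
            if ode:
                # probability-flow ODE: dx/dt = -1/2 b x - 1/2 b s
                x = x + (0.5 * b * x + 0.5 * b * s) * dt
            else:
                # reverse SDE drift f - g^2 s, with f = -1/2 b x, g = sqrt(b)
                z = rng.standard_normal(x.shape)
                x = x + (0.5 * b * x + b * s) * dt + np.sqrt(b * dt) * z
        return x

    # Sanity check: for N(0, I) data the VP marginals stay N(0, I) at every t,
    # so the exact score is s(x, t) = -x and both samplers should return
    # (approximately) standard-normal draws.
    x = reverse_sample(lambda x, t: -x, np.random.randn(2000, 2),
                       beta=lambda t: 0.1 + 9.9 * t)

Setting ode=True turns the sampler into a deterministic flow over the same marginals, which is the diffusion-to-normalizing-flow link the second bullet refers to.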

Why it matters

This paper provides a timely unified framework for continuous-time generative models, addressing the fragmented notation and competing derivations in the field. It clarifies the methods' shared structure and practical tradeoffs, and opens avenues for future research on approximation, stability, and scalability.

Original Abstract

We survey continuous-time generative modeling methods based on transporting a simple reference distribution to a data distribution via stochastic or deterministic dynamics. We present a unified framework in which diffusion models, score-based generative models, and flow matching are instances of learning a time-dependent vector field that induces a family of marginals $(\rho_t)_{t \in [0,1]}$ governed by continuity and Fokker-Planck equations. Such a unified theory is timely because these methods are converging methodologically, yet fragmented notation and competing derivations continue to obscure their shared structure and the practical tradeoffs governing sampling, stability, and computation. Within this framework, we (i) derive reverse-time sampling for diffusion and score-based models as controlled stochastic dynamics, (ii) show that the probability flow ODE yields identical marginals and connects diffusion to likelihood-based normalizing flows, and (iii) interpret flow matching as direct regression of the velocity field under a chosen interpolation, clarifying when it coincides with or differs from score-based training. We compare objectives, sampling schemes, and discretization errors under unified notation, discuss connections to Schrödinger bridges and entropic optimal transport, and summarize theoretical guarantees and open problems on approximation, stability, and scalability.
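The abstract's point (iii), flow matching as direct regression of the velocity field, reduces to a one-line objective under the common linear interpolation path. Here is a minimal PyTorch sketch under that assumption; cfm_loss, v_net, and the batch shapes are hypothetical names chosen for illustration, not the paper's.

    import torch

    def cfm_loss(v_net, x1):
        """Conditional flow-matching loss for the linear interpolation path.

        x1 is a data batch of shape (batch, dim); v_net(x, t) is a velocity
        network. Along x_t = (1 - t) x0 + t x1 with x0 ~ N(0, I), the
        conditional target velocity is x1 - x0, so training is plain
        regression: no score estimation or divergence terms are needed.
        """
        x0 = torch.randn_like(x1)           # reference (noise) sample
        t = torch.rand(x1.shape[0], 1)      # per-example time in [0, 1]
        xt = (1 - t) * x0 + t * x1          # point on the interpolation path
        target = x1 - x0                    # conditional velocity field
        return ((v_net(xt, t) - target) ** 2).mean()

At sampling time one integrates dx/dt = v(x, t) forward from t = 0 (reference) to t = 1 (data) with any ODE solver; whether this objective coincides with score-based training depends on the chosen interpolation, which is the comparison the paper formalizes.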
