ArXiv TLDR

Normalizing Trajectory Models

2605.08078

Jiatao Gu, Tianrong Chen, Ying Shen, David Berthelot, Shuangfei Zhai + 1 more

cs.CV cs.LG

TLDR

Normalizing Trajectory Models (NTM) model each reverse diffusion step as a conditional normalizing flow, yielding high-quality few-step samples with exact likelihood.

Key contributions

  • Models each reverse step as a conditional normalizing flow for expressive transitions.
  • Enables exact likelihood training over the entire generative trajectory.
  • Combines invertible blocks with a deep parallel predictor for end-to-end training.
  • Achieves high-quality 4-step samples via self-distillation, outperforming baselines.
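To make the first two bullets concrete, here is a minimal NumPy sketch of the general idea behind one "shallow invertible block": a conditional affine coupling layer whose scale and shift depend on both half of the state and a conditioning input (e.g. the previous trajectory state). All names and parameter shapes are hypothetical illustrations, not the paper's actual architecture; the point is that invertibility gives an exact, closed-form log-determinant, which is what makes exact likelihood training possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of one shallow invertible (affine-coupling) block.
# The scale/shift of the second half depend on the first half AND on a
# conditioning vector -- that dependence is what makes the flow "conditional".
D = 4  # toy dimensionality; the state is split into two halves of size D // 2
W_s = rng.normal(scale=0.1, size=(D, D // 2))  # weights producing log-scales
W_t = rng.normal(scale=0.1, size=(D, D // 2))  # weights producing shifts

def coupling_forward(z, cond):
    """Map noise z -> sample x; return x and the exact log|det J|."""
    z1, z2 = z[: D // 2], z[D // 2 :]
    h = np.concatenate([z1, cond[: D // 2]])      # condition on context
    log_s, t = np.tanh(h @ W_s), h @ W_t          # tanh keeps scales bounded
    x2 = z2 * np.exp(log_s) + t                   # affine map of second half
    return np.concatenate([z1, x2]), log_s.sum()  # log-det = sum of log-scales

def coupling_inverse(x, cond):
    """Exact inverse: recover z from x under the same conditioning."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    h = np.concatenate([x1, cond[: D // 2]])      # x1 == z1, so h is identical
    log_s, t = np.tanh(h @ W_s), h @ W_t
    z2 = (x2 - t) * np.exp(-log_s)
    return np.concatenate([x1, z2])

z = rng.normal(size=D)
cond = rng.normal(size=D)
x, logdet = coupling_forward(z, cond)
z_rec = coupling_inverse(x, cond)
assert np.allclose(z, z_rec)  # invertibility => tractable exact likelihood
```

Because the transform only rescales and shifts the second half given the (unchanged) first half, inversion never requires iterative solving, so density evaluation stays exact and cheap.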

Why it matters

Existing few-step diffusion methods (distillation, consistency training, adversarial objectives) sacrifice the exact-likelihood framework. NTM retains it by modeling each reverse step as a conditional normalizing flow, pairing few-step efficiency with theoretical soundness, and enables high-quality image generation in just four steps.

Original Abstract

Diffusion-based models decompose sampling into many small Gaussian denoising steps -- an assumption that breaks down when generation is compressed to a few coarse transitions. Existing few-step methods address this through distillation, consistency training, or adversarial objectives, but sacrifice the likelihood framework in the process. We introduce Normalizing Trajectory Models (NTM), which models each reverse step as an expressive conditional normalizing flow with exact likelihood training. Architecturally, NTM combines shallow invertible blocks within each step with a deep parallel predictor across the trajectory, forming an end-to-end network trainable from scratch or initializable from pretrained flow-matching models. Its exact trajectory likelihood further enables self-distillation: a lightweight denoiser trained on the model's own score produces high-quality samples in four steps. On text-to-image benchmarks, NTM matches or outperforms strong image generation baselines in just four sampling steps while uniquely retaining exact likelihood over the generative trajectory.
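The "exact likelihood over the generative trajectory" claim follows from chaining the change-of-variables formula across the steps: the log-density of a sample is the base-noise log-density minus the accumulated log-determinants of the per-step invertible maps. A toy sketch, using simple per-step affine maps as stand-ins for the paper's learned flow steps (the parameters here are random placeholders, not the model's):

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 4, 4  # toy dimension, and a 4-step trajectory as in the paper

def affine_step(z, log_s, t):
    """One invertible per-step map z -> x with closed-form log|det J|."""
    return z * np.exp(log_s) + t, log_s.sum()

def gaussian_logp(z):
    """Log-density of the standard-normal base distribution."""
    return -0.5 * (z @ z + D * np.log(2 * np.pi))

# Hypothetical per-step parameters (in the model these come from a network).
params = [(rng.normal(scale=0.1, size=D), rng.normal(size=D)) for _ in range(K)]

z = rng.normal(size=D)
log_px = gaussian_logp(z)   # start from the base-noise log-density
x = z
for log_s, t in params:
    x, logdet = affine_step(x, log_s, t)
    log_px -= logdet        # change of variables: subtract log|det| per step
# log_px is now an exact log-likelihood of the generated sample x
```

This additivity is why compressing generation to four coarse steps does not break the likelihood: each step's contribution to log p(x) is computed exactly, however large the transition.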

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.