ArXiv TLDR

Stream-R1: Reliability-Perplexity Aware Reward Distillation for Streaming Video Generation

arXiv: 2605.03849

Bin Wu, Mengqi Huang, Shaojin Wu, Weinan Jia, Yuxin Wang + 2 more

cs.CV

TLDR

Stream-R1 introduces reliability-perplexity aware reward distillation to improve streaming video generation by adaptively reweighting supervision.

Key contributions

  • Addresses limitations of indiscriminate distillation in streaming video diffusion models.
  • Introduces Stream-R1, a reward-guided framework for adaptive reweighting of distillation objectives.
  • Reweights loss at rollout (Inter-Reliability) and spatiotemporal (Intra-Perplexity) levels using a shared reward.
  • Improves visual quality, motion quality, and text alignment on standard benchmarks without architectural changes or additional inference cost.
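The rollout-level (Inter-Reliability) reweighting can be sketched as follows. This is a minimal illustration, not the paper's implementation: the temperature `beta` and the batch-normalization of the exponential weights are assumptions, as the summary only states that each rollout's loss is rescaled by an exponential of a pretrained video reward score.

```python
import math

def rollout_weights(rewards, beta=1.0):
    """Hypothetical inter-reliability weights: exponential of a pretrained
    video reward score, normalized over the batch of student rollouts.
    `beta` is an assumed temperature not specified in the summary."""
    exps = [math.exp(beta * r) for r in rewards]
    z = sum(exps)
    return [e / z for e in exps]

def weighted_dmd_loss(per_rollout_losses, rewards, beta=1.0):
    """Reweight each rollout's distillation loss so that rollouts with
    more reliable supervision (higher reward) dominate optimization."""
    w = rollout_weights(rewards, beta)
    return sum(wi * li for wi, li in zip(w, per_rollout_losses))
```

With equal per-rollout losses the weighted sum reduces to the uniform-weight baseline; the reweighting only changes the objective when rollouts differ in reward.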

Why it matters

Stream-R1 refines the distillation step that makes streaming video diffusion models practical. By weighting supervision toward reliable rollouts and concentrating optimization on the regions and frames with the most room for improvement, it overcomes the uniform-weighting limitation of prior distillation methods, yielding higher-quality video outputs at no extra inference cost for real-world deployment.

Original Abstract

Distillation-based acceleration has become foundational for making autoregressive streaming video diffusion models practical, with distribution matching distillation (DMD) as the de facto choice. Existing methods, however, train the student to match the teacher's output indiscriminately, treating every rollout, frame, and pixel as equally reliable supervision. We argue that this caps distilled quality, since it overlooks two complementary axes of variance in DMD supervision: Inter-Reliability across student rollouts whose supervision varies in reliability, and Intra-Perplexity across spatial regions and temporal frames that contribute unequally to where quality can still be improved. The objective thus conflates two questions under a uniform weight: whether to learn from each rollout, and where to concentrate optimization within it. To address this, we propose Stream-R1, a Reliability-Perplexity Aware Reward Distillation framework that adaptively reweights the distillation objective at both rollout and spatiotemporal-element levels through a single shared reward-guided mechanism. At the Inter-Reliability level, Stream-R1 rescales each rollout's loss by an exponential of a pretrained video reward score, so that rollouts with reliable supervision dominate optimization. At the Intra-Perplexity level, it back-propagates the same reward model to extract per-pixel gradient saliency, which is factored into spatial and temporal weights that concentrate optimization pressure on regions and frames where refinement yields the largest expected gain. An adaptive balancing mechanism prevents any single quality axis from dominating across visual quality, motion quality, and text alignment. Stream-R1 attains consistent improvements on all three dimensions over distillation baselines on standard streaming video generation benchmarks, without architectural modification or additional inference cost.
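The element-level (Intra-Perplexity) weighting described above can be sketched with a toy example. This is a hedged illustration under assumptions: it takes a precomputed per-pixel saliency volume (e.g. absolute reward-model gradients of shape T × H × W, represented here as nested lists) and factors it into normalized temporal and spatial weight maps; the mean-then-normalize factorization is an assumption, since the abstract does not specify how the saliency is factored.

```python
def saliency_weights(saliency, eps=1e-8):
    """Factor a per-pixel gradient-saliency volume into temporal and
    spatial weights (a sketch of the Intra-Perplexity idea).

    saliency: nested lists of nonnegative values, indexed [frame][row][col],
    e.g. |d reward / d pixel| back-propagated from a video reward model.
    Returns (temporal, spatial): per-frame weights summing to ~1, and a
    per-pixel weight map summing to ~1."""
    # Per-frame saliency mass -> temporal weights.
    frame_scores = [sum(sum(row) for row in frame) for frame in saliency]
    total = sum(frame_scores) + eps
    temporal = [s / total for s in frame_scores]

    # Aggregate over time -> spatial weight map.
    t = len(saliency)
    h, w = len(saliency[0]), len(saliency[0][0])
    spatial = [[sum(saliency[k][i][j] for k in range(t)) for j in range(w)]
               for i in range(h)]
    s_total = sum(sum(row) for row in spatial) + eps
    spatial = [[v / s_total for v in row] for row in spatial]
    return temporal, spatial
```

The resulting weights would then multiply the per-frame and per-pixel distillation losses, concentrating optimization pressure where the reward model is most sensitive.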
