ArXiv TLDR

Flow-OPD: On-Policy Distillation for Flow Matching Models

arXiv: 2605.08063

Zhen Fang, Wenxuan Huang, Yu Zeng, Yiming Zhao, Shuang Chen + 6 more

cs.CV, cs.AI

TLDR

Flow-OPD introduces an on-policy distillation framework for Flow Matching text-to-image models, tackling the reward sparsity and gradient interference that plague multi-task alignment.

Key contributions

  • Identifies two bottlenecks in multi-task alignment of Flow Matching models: reward sparsity from scalar rewards and gradient interference between heterogeneous objectives.
  • Proposes Flow-OPD, a two-stage framework that first trains domain-specialized teachers via single-reward GRPO, then consolidates them into one student through on-policy sampling, task-routing labeling, and dense trajectory-level supervision (sketched after this list).
  • Introduces Manifold Anchor Regularization (MAR), which uses a task-agnostic teacher to anchor generation to a high-quality manifold and prevent aesthetic degradation during alignment.
  • Raises the GenEval score from 63 to 92 and OCR accuracy from 59 to 94 on Stable Diffusion 3.5 Medium, roughly 10 points over vanilla GRPO overall.
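
The dense trajectory-level supervision and MAR terms above map naturally onto a per-step velocity-matching loss. Below is a minimal PyTorch sketch, assuming a rectified-flow-style linear interpolant and mean-squared velocity matching; `flow_opd_loss`, `route_task`, `mar_weight`, and the teacher callables are hypothetical names for illustration, not the paper's actual API.

```python
import torch

def flow_opd_loss(student, teachers, anchor_teacher, x1, prompt_emb,
                  route_task, num_steps=8, mar_weight=0.1):
    """Dense trajectory-level distillation with an MAR anchor term (sketch).

    All model callables map (x_t, t, prompt_emb) -> predicted velocity.
    x1 are on-policy samples generated by the student itself.
    """
    task = route_task(prompt_emb)              # task-routing labeling (sketch)
    teacher = teachers[task]                   # domain-specialized expert
    x0 = torch.randn_like(x1)                  # noise endpoint of the flow
    loss = x1.new_zeros(())
    for i in range(num_steps):
        t = x1.new_full((x1.shape[0],), (i + 0.5) / num_steps)
        tb = t.view(-1, *([1] * (x1.dim() - 1)))
        # Point on the linear interpolant (one common rectified-flow convention)
        x_t = (1 - tb) * x0 + tb * x1
        v_s = student(x_t, t, prompt_emb)
        with torch.no_grad():
            v_task = teacher(x_t, t, prompt_emb)           # specialized signal
            v_anchor = anchor_teacher(x_t, t, prompt_emb)  # task-agnostic anchor
        loss = loss + torch.mean((v_s - v_task) ** 2)      # dense supervision
        loss = loss + mar_weight * torch.mean((v_s - v_anchor) ** 2)  # MAR
    return loss / num_steps

# Toy check with linear stand-ins over flattened latents (illustrative only)
torch.manual_seed(0)
D = 16
wrap = lambda net: (lambda x, t, p: net(x))
student = torch.nn.Linear(D, D)
loss = flow_opd_loss(wrap(student), {"ocr": wrap(torch.nn.Linear(D, D))},
                     wrap(torch.nn.Linear(D, D)), torch.randn(4, D),
                     torch.zeros(4, 8), lambda p: "ocr")
loss.backward()
```

Because every integration step receives a teacher target, the supervision is dense along the whole trajectory rather than a single scalar reward at the end, which is the property the paper credits for sidestepping reward sparsity.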

Why it matters

Flow-OPD offers a scalable fix for two critical multi-task alignment problems in text-to-image models: reward sparsity and gradient interference. It delivers large benchmark gains while preserving image quality and human-preference alignment, pointing the way toward more generalist and robust generative models.

Original Abstract

Existing Flow Matching (FM) text-to-image models suffer from two critical bottlenecks under multi-task alignment: the reward sparsity induced by scalar-valued rewards, and the gradient interference arising from jointly optimizing heterogeneous objectives, which together give rise to a 'seesaw effect' of competing metrics and pervasive reward hacking. Inspired by the success of On-Policy Distillation (OPD) in the large language model community, we propose Flow-OPD, the first unified post-training framework that integrates on-policy distillation into Flow Matching models. Flow-OPD adopts a two-stage alignment strategy: it first cultivates domain-specialized teacher models via single-reward GRPO fine-tuning, allowing each expert to reach its performance ceiling in isolation; it then establishes a robust initial policy through a Flow-based Cold-Start scheme and seamlessly consolidates heterogeneous expertise into a single student via a three-step orchestration of on-policy sampling, task-routing labeling, and dense trajectory-level supervision. We further introduce Manifold Anchor Regularization (MAR), which leverages a task-agnostic teacher to provide full-data supervision that anchors generation to a high-quality manifold, effectively mitigating the aesthetic degradation commonly observed in purely RL-driven alignment. Built upon Stable Diffusion 3.5 Medium, Flow-OPD raises the GenEval score from 63 to 92 and the OCR accuracy from 59 to 94, yielding an overall improvement of roughly 10 points over vanilla GRPO, while preserving image fidelity and human-preference alignment and exhibiting an emergent 'teacher-surpassing' effect. These results establish Flow-OPD as a scalable alignment paradigm for building generalist text-to-image models.
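
Read as code, the abstract's three-step orchestration (on-policy sampling, task-routing labeling, dense trajectory-level supervision) becomes an outer loop in which the student generates its own samples and is then supervised along that trajectory. A hedged sketch, reusing `flow_opd_loss` from above: `student_rollout`, the Euler integrator, and `train_step` are illustrative assumptions, and the GRPO teacher training and Flow-based Cold-Start stages are omitted.

```python
import torch

@torch.no_grad()
def student_rollout(student, prompt_emb, shape, num_steps=28):
    """Euler rollout of the student's flow ODE from noise (t=0) to data (t=1)."""
    x = torch.randn(shape)
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = x.new_full((shape[0],), i * dt)
        x = x + dt * student(x, t, prompt_emb)  # dx/dt = v_theta(x, t, prompt)
    return x

def train_step(student, teachers, anchor_teacher, route_task,
               prompt_emb, latent_shape, opt):
    # Step 1: on-policy sampling -- the trajectory comes from the student itself
    x1 = student_rollout(student, prompt_emb, latent_shape)
    # Steps 2-3: task routing and dense trajectory supervision happen inside
    # flow_opd_loss (see the sketch above)
    loss = flow_opd_loss(student, teachers, anchor_teacher,
                         x1, prompt_emb, route_task)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Sampling from the student rather than a fixed dataset is what makes the distillation on-policy: the teachers correct the states the current student actually visits, in the same spirit as OPD for large language models.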
