ArXiv TLDR

Power Reinforcement Post-Training of Text-to-Image Models with Super-Linear Advantage Shaping

arXiv: 2605.10937

Haoyuan Sun, Jing Wang, Yuxin Song, Yu Lu, Bo Fang + 7 more

cs.CV

TLDR

SLAS improves reinforcement-learning post-training of text-to-image models with a super-linear advantage-shaping scheme that mitigates reward hacking while improving training efficiency and robustness.

Key contributions

  • Introduces Super-Linear Advantage Shaping (SLAS) for text-to-image post-training.
  • Uses a non-linear geometric structure to amplify high-advantage updates and suppress noise (a hypothetical sketch follows this list).
  • Mitigates reward hacking, improving out-of-domain performance and model robustness.
  • Achieves faster training dynamics while preserving semantic and compositional fidelity.
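
The digest doesn't give the shaping function itself, but "super-linear" suggests something like a power law on the advantage magnitude. The sketch below is a minimal, hypothetical illustration, assuming an exponent `power > 1` plus the batch-level normalization the abstract mentions; the name `shape_advantages` and the exact formula are our assumptions, not the paper's.

```python
import numpy as np

def shape_advantages(advantages: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Hypothetical super-linear shaping (not the paper's exact formula).

    Raising |A| to a power > 1 amplifies high-advantage updates and
    shrinks near-zero advantages, which are the most likely to be noise.
    """
    shaped = np.sign(advantages) * np.abs(advantages) ** power
    # Batch-level normalization (mentioned in the abstract) keeps update
    # magnitudes stable when reward scales vary.
    return shaped / (shaped.std() + 1e-8)

# With power=2, an advantage of 1.5 grows ~15x relative to one of 0.1:
# the raw ratio 1.5/0.1 = 15 becomes 2.25/0.01 = 225 before normalization.
print(shape_advantages(np.array([0.1, -0.2, 1.5, -1.8])))
```

With `power = 1` this reduces to plain batch normalization, so the exponent directly controls how aggressively weak signals are suppressed.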

Why it matters

Reinforcement learning post-training for T2I models often suffers from reward hacking, which caps genuine performance gains. This paper introduces SLAS, which addresses the problem by reshaping the local policy geometry with an advantage-weighted Fisher-Rao metric, yielding more robust, efficient, and reliable improvements for text-to-image generation.

Original Abstract

Recently, post-training methods based on reinforcement learning, with a particular focus on Group Relative Policy Optimization (GRPO), have emerged as a robust paradigm for further advancing text-to-image (T2I) models. However, these methods are often prone to reward hacking, wherein models exploit biases in imperfect reward functions rather than yielding genuine performance gains. In this work, we identify that normalization can lead to miscalibration, and that directly removing the prompt-level standard deviation term yields an optimal policy-ascent direction that is linear in the advantage but still limits the separation of genuine signals from noise. To mitigate these issues, we propose Super-Linear Advantage Shaping (SLAS) by revisiting the functional update from an information-geometry perspective. By extending the Fisher-Rao information metric with advantage-dependent weighting, SLAS introduces a non-linear geometric structure that reshapes the local policy space. This design relaxes constraints along high-advantage directions to amplify informative updates, while tightening those in low-advantage regions to suppress illusory gradients. In addition, batch-level normalization is applied to stabilize training under varying reward scales. Extensive evaluations demonstrate that SLAS consistently surpasses the DanceGRPO baseline across multiple backbones and benchmarks. In particular, it yields faster training dynamics, improved out-of-domain performance on GenEval and UniGenBench++, and enhanced robustness to model scaling, while mitigating reward hacking and preserving semantic and compositional fidelity in generations.
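
The abstract's normalization argument is easy to state in code. Below is a minimal sketch, assuming rewards arrive as a `(num_prompts, group_size)` array: the standard GRPO prompt-level normalization whose std term the authors flag, the std-free variant that is linear in the advantage, and a batch-level normalization like the one SLAS adds. The function names are ours, and the advantage-dependent Fisher-Rao weighting itself is not reproduced, since the abstract gives no formula for it.

```python
import numpy as np

EPS = 1e-8

def grpo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Standard GRPO: center and scale rewards within each prompt's group.
    rewards: shape (num_prompts, group_size). The per-prompt std term here
    is what the abstract identifies as a source of miscalibration."""
    mu = rewards.mean(axis=1, keepdims=True)
    sigma = rewards.std(axis=1, keepdims=True)
    return (rewards - mu) / (sigma + EPS)

def linear_advantages(rewards: np.ndarray) -> np.ndarray:
    """Drop the prompt-level std: the resulting ascent direction is
    linear in the advantage, as the abstract notes."""
    return rewards - rewards.mean(axis=1, keepdims=True)

def batch_normalize(advantages: np.ndarray) -> np.ndarray:
    """Batch-level normalization (per the abstract) to stabilize training
    when reward scales vary across prompts."""
    return advantages / (advantages.std() + EPS)
```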
