AlphaGRPO: Unlocking Self-Reflective Multimodal Generation in UMMs via Decompositional Verifiable Reward
Runhui Huang, Jie Wu, Rui Yang, Zhe Liu, Hengshuang Zhao
TLDR
AlphaGRPO enhances multimodal generation in unified multimodal models (UMMs) by applying GRPO with a novel Decompositional Verifiable Reward, unlocking self-reflection and reasoning.
Key contributions
- Applies GRPO to UMMs, boosting multimodal generation without a cold-start stage.
- Enables advanced reasoning for text-to-image generation and self-reflective output refinement.
- Introduces DVReward, which uses an LLM to decompose complex requests into atomic, verifiable questions for reliable, interpretable feedback.
- Achieves robust improvements across generation benchmarks and strong gains on the GEdit editing benchmark without editing-specific training.
Why it matters
AlphaGRPO tackles the challenge of providing stable supervision for multimodal generation by enabling models to self-reflect on and reason about their own outputs. Its DVReward replaces opaque holistic scalar scores with interpretable, question-level feedback, leading to more accurate, user-aligned outputs and improving UMM reliability for real-world applications.
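The decompose-then-verify idea behind DVReward can be illustrated with a minimal sketch. The paper uses an LLM to break a user request into atomic semantic and quality questions and a general MLLM to answer them; here both are replaced by hypothetical stand-in functions (`decompose_request`, `mock_verifier`), so only the reward aggregation reflects the described scheme:

```python
from dataclasses import dataclass

@dataclass
class AtomicCheck:
    """One verifiable yes/no question decomposed from a user request."""
    question: str
    kind: str  # "semantic" or "quality"

def decompose_request(prompt: str) -> list[AtomicCheck]:
    """Stand-in for the LLM decomposer (the paper uses an actual LLM):
    split the request into atomic semantic checks plus a quality check."""
    checks = [AtomicCheck(f"Does the image depict: '{part.strip()}'?", "semantic")
              for part in prompt.split(" and ")]
    checks.append(AtomicCheck("Is the image free of visual artifacts?", "quality"))
    return checks

def dv_reward(prompt: str, verifier) -> float:
    """Average the verifier's binary answers over all atomic checks."""
    checks = decompose_request(prompt)
    answers = [1.0 if verifier(c.question) else 0.0 for c in checks]
    return sum(answers) / len(answers)

# Toy verifier standing in for the general MLLM judge.
mock_verifier = lambda q: "red" in q or "artifacts" in q

print(dv_reward("a red cube and a blue sphere", mock_verifier))
```

Because each atomic question gets its own binary answer, the reward is interpretable: a low score points directly at which checks failed, unlike a single holistic scalar.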
Original Abstract
In this paper, we propose AlphaGRPO, a novel framework that applies Group Relative Policy Optimization (GRPO) to AR-Diffusion Unified Multimodal Models (UMMs) to enhance multimodal generation capabilities without an additional cold-start stage. Our approach unlocks the model's intrinsic potential to perform advanced reasoning tasks: Reasoning Text-to-Image Generation, where the model actively infers implicit user intents, and Self-Reflective Refinement, where it autonomously diagnoses and corrects misalignments in generated outputs. To address the challenge of providing stable supervision for real-world multimodal generation, we introduce the Decompositional Verifiable Reward (DVReward). Unlike holistic scalar rewards, DVReward utilizes an LLM to decompose complex user requests into atomic, verifiable semantic and quality questions, which are then evaluated by a general MLLM to provide reliable and interpretable feedback. Extensive experiments demonstrate that AlphaGRPO yields robust improvements across multimodal generation benchmarks, including GenEval, TIIF-Bench, DPG-Bench and WISE, while also achieving significant gains in editing tasks on GEdit without training on editing tasks. These results validate that our self-reflective reinforcement approach effectively leverages inherent understanding to guide high-fidelity generation. Project page: https://huangrh99.github.io/AlphaGRPO/
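The GRPO step the abstract refers to can be sketched in a few lines: for each prompt the policy samples a group of rollouts, and each rollout's reward (e.g. a DVReward-style fraction) is normalized against the group's mean and standard deviation instead of a learned critic. The snippet below shows only that critic-free advantage computation, not the full policy update:

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Group-relative advantages: normalize each rollout's reward by the
    mean and population std of its sampling group (GRPO's critic-free baseline)."""
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four image rollouts for one prompt, scored as DVReward-style fractions.
print(grpo_advantages([0.25, 0.5, 0.75, 1.0]))
```

Advantages sum to zero within a group, so above-average samples are reinforced and below-average ones suppressed without training a separate value model.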