OpenVLThinkerV2: A Generalist Multimodal Reasoning Model for Multi-domain Visual Tasks
Wenbo Hu, Xin Chen, Yan Gao-Tian, Yihe Deng, Nanyun Peng + 1 more
TLDR
OpenVLThinkerV2 introduces Gaussian GRPO and task-level shaping to create a robust multimodal reasoning model, outperforming strong open-source and proprietary models across 18 benchmarks.
Key contributions
- Gaussian GRPO (G^2RPO) ensures inter-task gradient equity by forcing advantage distributions to match 𝒩(0,1).
- Mitigates heavy-tail outliers and offers symmetric updates for positive and negative rewards.
- Response length shaping dynamically balances extended reasoning with direct visual grounding.
- Entropy shaping tightly bounds model exploration, preventing both entropy collapse and explosion.
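The core idea behind G^2RPO, as described above, is to replace GRPO's linear standardization of group rewards with a non-linear transform whose output is guaranteed to follow 𝒩(0,1). The paper's exact formulation is not reproduced here; the sketch below illustrates one plausible realization, a rank-based inverse-normal (quantile) transform, which matches the stated properties: the advantage distribution is standard normal regardless of reward topology, heavy-tailed outliers are bounded, and positive and negative updates are symmetric. The function name and tie-handling details are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of distributional advantage matching (NOT the paper's code):
# map a rollout group's rewards to standard-normal advantages via their ranks.
from statistics import NormalDist

def gaussian_advantages(rewards):
    """Rank-based inverse-normal transform of a group's rewards.

    Unlike linear standardization (r - mean) / std, the output quantiles are
    drawn from N(0,1) whatever the reward distribution looks like, so a single
    extreme reward cannot produce an extreme advantage, and updates stay
    symmetric around zero.
    """
    n = len(rewards)
    nd = NormalDist()  # standard normal, mean 0, std 1
    # Assign 1-based ranks, averaging ranks within tie blocks for symmetry.
    order = sorted(range(n), key=lambda i: rewards[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and rewards[order[j + 1]] == rewards[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    # Blom-style plotting positions keep quantiles strictly inside (0, 1).
    return [nd.inv_cdf((r - 0.375) / (n + 0.25)) for r in ranks]

# Heavy-tailed rewards: the 100.0 outlier gets the same bounded advantage
# it would get if it were merely the largest reward in the group.
advs = gaussian_advantages([0.0, 0.1, 0.2, 0.3, 100.0])
```

Note how the outlier's advantage depends only on its rank, not its magnitude, which is exactly the heavy-tail robustness and update symmetry the contributions list claims for G^2RPO.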
Why it matters
This paper addresses key challenges in developing generalist multimodal models by introducing G^2RPO for stable RL training and novel shaping mechanisms. Its OpenVLThinkerV2 model achieves state-of-the-art performance, pushing the boundaries of open-source multimodal AI.
Original Abstract
Group Relative Policy Optimization (GRPO) has emerged as the de facto Reinforcement Learning (RL) objective driving recent advancements in Multimodal Large Language Models. However, extending this success to open-source multimodal generalist models remains heavily constrained by two primary challenges: the extreme variance in reward topologies across diverse visual tasks, and the inherent difficulty of balancing fine-grained perception with multi-step reasoning capabilities. To address these issues, we introduce Gaussian GRPO (G$^2$RPO), a novel RL training objective that replaces standard linear scaling with non-linear distributional matching. By mathematically forcing the advantage distribution of any given task to strictly converge to a standard normal distribution, $\mathcal{N}(0,1)$, G$^2$RPO theoretically ensures inter-task gradient equity, mitigates vulnerabilities to heavy-tail outliers, and offers symmetric updates for positive and negative rewards. Leveraging the enhanced training stability provided by G$^2$RPO, we introduce two task-level shaping mechanisms to seamlessly balance perception and reasoning. First, response length shaping dynamically elicits extended reasoning chains for complex queries while enforcing direct outputs to bolster visual grounding. Second, entropy shaping tightly bounds the model's exploration zone, effectively preventing both entropy collapse and entropy explosion. Integrating these methodologies, we present OpenVLThinkerV2, a highly robust, general-purpose multimodal model. Extensive evaluations across 18 diverse benchmarks demonstrate its superior performance over strong open-source and leading proprietary frontier models.