Understanding the Role of Hallucination in Reinforcement Post-Training of Multimodal Reasoning Models
Gengwei Zhang, Jie Peng, Zhen Tan, Mufan Qiu, Hossein Nourkhiz Mahjoub + 4 more
TLDR
This paper introduces a framework to analyze how hallucination shapes RL post-training of multimodal reasoning models, showing that models can improve under RL even when forced to reason without the visual information needed to answer.
Key contributions
- Proposes "Hallucination-as-Cue Framework" to analyze RL post-training effects on MLLMs.
- Uses hallucination-inductive corruptions to force models to reason by hallucination during training.
- Reveals that RL post-training under purely hallucination-inductive settings can still significantly improve MLLM reasoning performance.
- Demonstrates hallucination-driven RL can sometimes outperform standard MLLM training.
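The paper does not publish its corruption code here, but the idea of a hallucination-inductive, modality-specific corruption can be sketched as follows: remove or replace the visual content that is essential for answering, so any correct answer must come from hallucinated (non-visual) reasoning. The function name, modes, and array-based image representation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def corrupt_image(image: np.ndarray, mode: str = "blank", seed: int = 0) -> np.ndarray:
    """Hypothetical hallucination-inductive corruption for an H x W x C
    uint8 image: strip the visual evidence needed to derive the answer,
    forcing the model to 'reason by hallucination'."""
    if mode == "blank":
        # Remove all visual content.
        return np.zeros_like(image)
    if mode == "noise":
        # Replace the image with uniform random noise of the same shape.
        rng = np.random.default_rng(seed)
        return rng.integers(0, 256, size=image.shape, dtype=image.dtype)
    raise ValueError(f"unknown corruption mode: {mode}")

# Example: corrupt a dummy gray image both ways.
img = np.full((224, 224, 3), 128, dtype=np.uint8)
blank = corrupt_image(img, mode="blank")
noisy = corrupt_image(img, mode="noise")
```

Applying such corruptions during both training and evaluation, as the framework proposes, lets one measure how much of an RL-trained model's reported gain survives when the visual signal is gone.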
Why it matters
This work challenges prevailing assumptions about how MLLMs learn visual reasoning through RL. By showing that hallucination can be a beneficial cue, it motivates the development of more effective, modality-aware RL-based training designs. This could lead to more robust and truly multimodal AI systems.
Original Abstract
The recent success of reinforcement learning (RL) in large reasoning models has inspired the growing adoption of RL for post-training Multimodal Large Language Models (MLLMs) to enhance their visual reasoning capabilities. Although many studies have reported improved performance, it remains unclear whether RL training truly enables models to learn from visual information. In this work, we propose the Hallucination-as-Cue Framework, an analytical framework designed to investigate the effects of RL-based post-training on multimodal reasoning models from the perspective of model hallucination. Specifically, we introduce hallucination-inductive, modality-specific corruptions that remove or replace essential information required to derive correct answers, thereby forcing the model to reason by hallucination. By applying these corruptions during both training and evaluation, our framework provides a unique perspective for diagnosing RL training dynamics and understanding the intrinsic properties of datasets. Through extensive experiments and analyses across multiple multimodal reasoning benchmarks, we reveal that the role of model hallucination for RL-training is more significant than previously recognized. For instance, we find that RL post-training under purely hallucination-inductive settings can still significantly improve models' reasoning performance, and in some cases even outperform standard training. These findings challenge prevailing assumptions about MLLM reasoning training and motivate the development of more modality-aware RL-based training designs.