Reward Design for Physical Reasoning in Vision-Language Models
Derek Lilienthal, Manisha Mukherjee, Sameera Horawalavithana
TLDR
This paper ablates four reward designs for GRPO-based training of Vision-Language Models on physical reasoning, finding that accuracy-based rewards yield the strongest overall gains.
Key contributions
- Compared four reward signals for GRPO-based VLM training: format compliance, answer accuracy, a composite rubric, and attention-based (see the sketch after this list).
- Evaluated on PhyX, a 3,000-problem benchmark spanning six physics domains and six reasoning types, in both multiple-choice and open-ended formats.
- Accuracy-based rewards provided the strongest overall performance gains for VLMs.
- Novel attention-weight reward improved spatial reasoning accuracy from 0.27 to 0.50 without spatial annotations.
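To make the four signals concrete, here is a minimal Python sketch of how such reward functions might look. The paper does not publish this code; the function names, the `<answer>`-tag format check, the equal rubric weighting, and the attention aggregation are illustrative assumptions.

```python
import re

def format_reward(response: str) -> float:
    """Format compliance: reward output that follows the expected structure,
    e.g. an <answer>...</answer> tag (the exact format is an assumption)."""
    return 1.0 if re.search(r"<answer>.*?</answer>", response, re.DOTALL) else 0.0

def accuracy_reward(response: str, gold: str) -> float:
    """Answer accuracy: exact match between the extracted answer and the gold label."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    pred = match.group(1).strip() if match else ""
    return 1.0 if pred == gold.strip() else 0.0

def rubric_reward(answer_ok: bool, principle_ok: bool, units_ok: bool) -> float:
    """Composite rubric: answer correctness, physics-principle identification,
    and unit consistency. Equal weighting is an assumption, not the paper's."""
    return (answer_ok + principle_ok + units_ok) / 3.0

def attention_reward(attn_to_image: list[float]) -> float:
    """Internal reward from the model's own attention: the average attention
    mass that generated tokens place on image-region tokens. The paper derives
    its reward from attention weights; this aggregation is a hypothetical one."""
    return sum(attn_to_image) / len(attn_to_image) if attn_to_image else 0.0
```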
Why it matters
VLMs still fall well short of human performance on physics benchmarks, and this work systematically investigates how reward design shapes their reasoning. The key takeaway is that reward choice is not a neutral training detail: different rewards induce different domain-specific reasoning behaviors, so the reward signal should be matched to the reasoning skill one wants to improve.
Original Abstract
Physical reasoning over visual inputs demands tight integration of visual perception, domain knowledge, and multi-step symbolic inference. Yet even state-of-the-art Vision Language Models (VLMs) fall far short of human performance on physics benchmarks. While post-training algorithms such as Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) have demonstrated strong reasoning gains in language models, how reward design shapes VLM physical reasoning behavior remains poorly understood. We present a systematic reward ablation study for GRPO-based VLM training on physical reasoning. We compare four reward signals of increasing semantic richness: format compliance, answer accuracy, a composite rubric reward (answer correctness, physics principle identification, and unit consistency), and a novel internal reward derived from model attention weights over input image regions. We evaluate on PhyX, a 3,000-problem benchmark spanning six physics domains and six reasoning types across multiple-choice and open-ended formats, using IBM Granite Vision 3.3 (2B). Across both formats, GRPO with accuracy-based rewards outperforms SFT on most domains, though gains vary substantially by reward type and domain. Reward design does not uniformly improve performance. Instead, it induces domain-specific reasoning behaviors. Accuracy-based rewards provide the strongest overall gains. Rubric rewards improve structured reasoning quality without consistent accuracy improvements. Attention-based rewards enhance spatial reasoning while degrading performance in symbolic domains. Our internal attention-weight reward requires no spatial annotations and improves spatial relation accuracy from 0.27 to 0.50, suggesting that supervising where the model attends during generation is a promising direction for visually grounded physical reasoning.
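For readers unfamiliar with GRPO, the rewards above enter training through a group-relative advantage: each sampled response is scored against the other responses drawn for the same prompt. A minimal sketch of that standard computation (not code from the paper):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize each response's reward against its sampling group:
    advantage_i = (r_i - mean(group)) / (std(group) + eps)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four responses sampled for one prompt, rewarded by accuracy.
# Correct responses get positive advantages, incorrect ones negative.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))
```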