Grounding Video Reasoning in Physical Signals
Alibay Osmanli, Zixu Cheng, Shaogang Gong
TLDR
This paper introduces a new grounded benchmark for physical video understanding, evaluating models not only on what physical events occur but also on when and where they occur.
Key contributions
- Introduces a grounded benchmark for physical video understanding.
- Extends V-STaR evaluation to four video sources and six physics domains.
- Evaluates models under three prompt families and four input conditions (see the sketch after this list).
- Reveals spatial grounding as the weakest aspect across different settings.
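The four input conditions amount to controlled perturbations of each clip. As a rough illustration, here is a minimal sketch of what frame-level versions of these conditions could look like; the function bodies are assumptions, since the summary does not specify how shuffling, ablation, or masking are implemented.

```python
import random
from typing import List

import numpy as np

# Hypothetical frame-level perturbations for the four input conditions named
# in the paper: original, shuffled, ablated, and frame-masked. The exact
# implementations below are assumptions for illustration only.

def original(frames: List[np.ndarray]) -> List[np.ndarray]:
    """Unmodified clip: the reference condition."""
    return frames

def shuffled(frames: List[np.ndarray], seed: int = 0) -> List[np.ndarray]:
    """Randomly permute frame order, destroying temporal structure."""
    rng = random.Random(seed)
    out = list(frames)
    rng.shuffle(out)
    return out

def ablated(frames: List[np.ndarray], keep_every: int = 4) -> List[np.ndarray]:
    """Thin out the visual evidence by dropping frames (assumed form of ablation)."""
    return frames[::keep_every]

def frame_masked(frames: List[np.ndarray], mask_ratio: float = 0.5,
                 seed: int = 0) -> List[np.ndarray]:
    """Replace a random subset of frames with zeroed (black) frames."""
    rng = random.Random(seed)
    masked = set(rng.sample(range(len(frames)), int(mask_ratio * len(frames))))
    return [np.zeros_like(f) if i in masked else f for i, f in enumerate(frames)]
```

Comparing a model's what/when/where accuracy across these conditions is what lets the benchmark separate genuine video grounding from answers recoverable through textual regularities alone.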
Why it matters
Current video understanding models often name physical events correctly yet fail to localize them in time or space. This paper addresses that gap with a robust, physically grounded benchmark that diagnoses model weaknesses, especially in spatial reasoning, and pushes video Q&A evaluation beyond aggregate accuracy.
Original Abstract
Physical video understanding requires more than naming an event correctly. A model can answer a question about pouring, sliding, or collision from textual regularities while still failing to localize the event in time or space. We introduce a grounded benchmark for physical video understanding that extends the what–when–where evaluation structure of V-STaR to four video sources, six physics domains, three prompt families (physics, vstar_like, and neutral_rstr), and four input conditions (original, shuffled, ablated, and frame-masked). The benchmark contains 1,560 base video clips from SSV2, YouCook2, HoloAssist, and Roundabout-TAU. Each clip is first converted into a shared grounded event record, and the three query families are derived from that record. Temporal and spatial targets are shared across prompt families, while the non-physics families use deterministic family-appropriate semantic a_what targets derived from the same record. Across models and prompt families, physics remains the strongest regime overall, vstar_like is the clearest non-physics semantic comparison, and neutral_rstr behaves as a harder templated control. Prompt-family robustness is selective rather than universal, perturbation gains cluster in weak original cases, and spatial grounding is the weakest across settings. These results suggest that video Q&A reasoning benchmarks should report physically grounded, prompt-aware, and perturbation-aware diagnostics alongside aggregate accuracy.
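To make the shared-record design concrete, below is a minimal sketch of what the grounded event record and the per-family query derivation might look like. Only the prompt-family names and the what/when/where split come from the abstract; every field and function name here is a hypothetical illustration, not the paper's actual schema.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical shape of the shared grounded event record described in the
# abstract: one record per clip, from which all three prompt families derive
# their questions. Field names are illustrative assumptions.

@dataclass
class GroundedEventRecord:
    video_id: str
    domain: str                      # one of the six physics domains
    a_what: Dict[str, str]           # prompt family -> semantic answer target
    t_span: Tuple[float, float]      # shared temporal target (start, end), seconds
    bbox: Tuple[int, int, int, int]  # shared spatial target (x, y, w, h)

def derive_queries(rec: GroundedEventRecord) -> Dict[str, dict]:
    """Derive the what/when/where triplet for each prompt family.

    Temporal and spatial targets are shared across families; only the
    semantic a_what target varies by family, as the abstract states.
    """
    return {
        family: {"what": rec.a_what[family],
                 "when": rec.t_span,
                 "where": rec.bbox}
        for family in ("physics", "vstar_like", "neutral_rstr")
    }
```

Sharing the temporal and spatial targets across families is the key design choice: any accuracy gap between prompt families can then be attributed to the semantic framing of the question rather than to differing ground truth.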