
Find, Fix, Reason: Context Repair for Video Reasoning

arXiv:2604.16243

Haojian Huang, Chuanyu Qin, Yinchuan Li, Yingcong Chen

cs.CV

TLDR

Find, Fix, Reason (FFR) introduces a teacher-student framework for video reasoning that repairs the student's context by supplying the missing spatiotemporal evidence.

Key contributions

  • FFR uses a frozen, tool-integrated teacher to identify the missing spatiotemporal dependency and supply a minimal evidence patch (e.g., timestamps, regions) from the original video.
  • The student re-answers with this added context, and updates flow through a chosen-rollout scheme integrated into Group Relative Policy Optimization (GRPO).
  • A Robust Improvement Reward (RIR) aligns optimization with correct answers (outcome validity) and evidence-grounded rationales (dependency alignment); a hedged sketch of one possible form follows this list.
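
The summary does not give RIR's exact formulation, but its two stated goals, outcome validity and dependency alignment, suggest a reward of roughly the following shape. This is a minimal Python sketch under that assumption; `rir_reward`, its substring-matching alignment check, and the weights `w_outcome`/`w_align` are illustrative choices, not the authors' definition.

```python
def rir_reward(pred_answer: str, gold_answer: str,
               rationale: str, evidence_keys: list[str],
               w_outcome: float = 1.0, w_align: float = 0.5) -> float:
    """Hypothetical RIR-style reward: correct answer + evidence-grounded rationale.

    A sketch only; the paper's actual RIR formulation may differ.
    """
    # Outcome validity: exact match between predicted and gold answers.
    outcome = float(pred_answer.strip().lower() == gold_answer.strip().lower())
    # Dependency alignment: fraction of teacher-cited evidence keys
    # (e.g., timestamps or region tags) that the rationale mentions.
    if evidence_keys:
        hits = sum(k.lower() in rationale.lower() for k in evidence_keys)
        align = hits / len(evidence_keys)
    else:
        align = 0.0
    return w_outcome * outcome + w_align * align
```

In GRPO, such per-rollout rewards would then be group-normalized before the policy update, as sketched after the abstract below.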

Why it matters

This paper addresses limitations in current video reasoning pipelines by enabling larger models to supply richer, targeted context to smaller models. It improves accuracy and generalization by repairing context dynamically, leading to more robust and causally meaningful video understanding.

Original Abstract

Reinforcement learning has advanced video reasoning in large multi-modal models, yet dominant pipelines either rely on on-policy self-exploration, which plateaus at the model's knowledge boundary, or hybrid replay that mixes policies and demands careful regularization. Dynamic context methods zoom into focused evidence but often require curated pretraining and two-stage tuning, and their context remains bounded by a small model's capability. In contrast, larger models excel at instruction following and multi-modal understanding, can supply richer context to smaller models, and rapidly zoom in on target regions via simple tools. Building on this capability, we introduce an observation-level intervention: a frozen, tool-integrated teacher identifies the missing spatiotemporal dependency and provides a minimal evidence patch (e.g., timestamps, regions) from the original video while the question remains unchanged. The student answers again with the added context, and training updates with a chosen-rollout scheme integrated into Group Relative Policy Optimization (GRPO). We further propose a Robust Improvement Reward (RIR) that aligns optimization with two goals: outcome validity through correct answers and dependency alignment through rationales that reflect the cited evidence. Advantages are group-normalized across the batch, preserving on-policy exploration while directing it along causally meaningful directions with minimal changes to the training stack. Experiments on various related benchmarks show consistent accuracy gains and strong generalization. Web page and source code will be available at https://github.com/JethroJames/FFR.git.
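
To make the training step concrete, below is a minimal sketch of how an observation-level repair could feed GRPO-style group-normalized advantages. Everything here is an assumption layered on the abstract: `policy` (the student), `teacher` (the frozen tool-integrated model), and `reward_fn` are hypothetical callables, and triggering repair only when the first answer is wrong is one plausible reading of the chosen-rollout scheme, not a confirmed detail.

```python
import numpy as np

def group_normalized_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """GRPO-style advantage: standardize rewards within a rollout group."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def ffr_rollout_group(video, question, gold_answer,
                      policy, teacher, reward_fn, n_rollouts=8):
    """Sample a rollout group with context repair, then compute advantages.

    A hedged sketch; the actual trigger, patch format, and update rule
    live in the authors' training stack.
    """
    rollouts = []
    for _ in range(n_rollouts):
        answer, rationale = policy(video, question)  # first attempt; question unchanged
        if answer != gold_answer:
            # "Find": the frozen teacher locates the missing spatiotemporal
            # dependency and returns a minimal evidence patch
            # (e.g., timestamps, regions) from the original video.
            patch = teacher(video, question)
            # "Fix" + "Reason": the student re-answers with the added context.
            answer, rationale = policy(video, question, evidence=patch)
        rollouts.append((answer, rationale))
    rewards = np.array([reward_fn(a, gold_answer, r) for a, r in rollouts])
    return rollouts, group_normalized_advantages(rewards)
```

Because only the observation changes and the advantages stay group-normalized, the update remains on-policy while exploration is steered toward the cited evidence, which is the property the abstract emphasizes.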
