ArXiv TLDR

UniversalVTG: A Universal and Lightweight Foundation Model for Video Temporal Grounding

arXiv:2604.08522

Joungbin An, Agrim Jain, Kristen Grauman

cs.CV

TLDR

UniversalVTG is a lightweight, universal foundation model for video temporal grounding that achieves state-of-the-art results across diverse benchmarks while matching or exceeding much larger MLLM-based approaches.

Key contributions

  • Introduces UniversalVTG, a single, lightweight model for video temporal grounding.
  • Employs a Query Unifier to handle diverse query formats, preventing negative transfer.
  • Scales efficiently to long, untrimmed videos with an optimized grounding head.
  • Achieves SOTA results on multiple benchmarks, matching or exceeding MLLM-based approaches that are over 100× larger.
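The Query Unifier's role can be pictured as a canonicalization step that rewrites heterogeneous query styles (questions, step labels, captions) into one declarative form before grounding. A minimal, purely illustrative Python sketch follows; the function name, style labels, and rewrite rules are assumptions for illustration, not the paper's actual implementation (which operates offline and is not specified here):

```python
# Hypothetical sketch of query canonicalization for VTG.
# Maps heterogeneous query styles into a shared declarative space,
# mirroring in spirit what the paper's Query Unifier does offline.

def unify_query(query: str, style: str) -> str:
    """Canonicalize a query into a shared declarative sentence."""
    q = query.strip().rstrip("?.")
    if style == "question":  # e.g. Ego4D-NLQ-style egocentric questions
        # Strip the interrogative frame; keep the event description.
        for prefix in ("where did i ", "when did i ", "what did i "):
            if q.lower().startswith(prefix):
                return "The camera wearer " + q[len(prefix):] + "."
        return q + "."
    if style == "step":  # e.g. GoalStep-style step labels
        return "The person performs the step: " + q.lower() + "."
    return q + "."  # captions are assumed already declarative

print(unify_query("Where did I put the keys", "question"))
# -> The camera wearer put the keys.
print(unify_query("Add flour to the bowl", "step"))
# -> The person performs the step: add flour to the bowl.
```

In this toy version the rules are hand-written; the point is only that downstream grounding sees one consistent linguistic format, which is what the paper credits with preventing negative transfer under joint training.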

Why it matters

This paper addresses the limitations of dataset-specific VTG models and the high cost of MLLMs. UniversalVTG provides a universal, lightweight, and highly effective solution, making advanced video temporal grounding more practical and accessible.

Original Abstract

Video temporal grounding (VTG) is typically tackled with dataset-specific models that transfer poorly across domains and query styles. Recent efforts to overcome this limitation have adapted large multimodal language models (MLLMs) to VTG, but their high compute cost and limited video context still hinder long-video grounding. We instead scale unified supervision while keeping the model lightweight. We present UniversalVTG, a single VTG model trained with large-scale cross-dataset pretraining. An offline Query Unifier canonicalizes heterogeneous query formats into a shared declarative space, reducing linguistic mismatch and preventing the negative transfer observed under naïve joint training. Combined with an efficient grounding head, UniversalVTG scales to long, untrimmed videos. Across diverse benchmarks (GoalStep-StepGrounding, Ego4D-NLQ, TACoS, Charades-STA, and ActivityNet-Captions), one UniversalVTG checkpoint achieves state-of-the-art performance versus dedicated VTG models. Moreover, despite being $>100\times$ smaller than recent MLLM-based approaches, UniversalVTG matches or exceeds their accuracy on multiple benchmarks, offering a practical alternative to parameter-heavy MLLMs.
