ArXiv TLDR

Exploring High-Order Self-Similarity for Video Understanding

arXiv: 2604.20760

Manjin Kim, Heeseung Kwon, Karteek Alahari, Minsu Cho

cs.CV

TLDR

The MOSS module leverages multi-order space-time self-similarity to improve video understanding at minimal computational cost.

Key contributions

  • Introduces Multi-Order Self-Similarity (MOSS) module for capturing diverse temporal dynamics.
  • MOSS enhances motion modeling with low computational and memory overhead.
  • Validated on action recognition, video VQA, and robotic tasks with consistent gains.
  • Source code and checkpoints to be publicly released for broad adoption.

Why it matters

This paper advances video understanding by modeling complex temporal patterns efficiently. The broad applicability and low overhead of MOSS make it a practical drop-in temporal modeling module for diverse video analysis tasks.

Original Abstract

Space-time self-similarity (STSS), which captures visual correspondences across frames, provides an effective way to represent temporal dynamics for video understanding. In this work, we explore higher-order STSS and demonstrate how STSSs at different orders reveal distinct aspects of these dynamics. We then introduce the Multi-Order Self-Similarity (MOSS) module, a lightweight neural module designed to learn and integrate multi-order STSS features. It can be applied to diverse video tasks to enhance motion modeling capabilities while consuming only marginal computational cost and memory usage. Extensive experiments on video action recognition, motion-centric video VQA, and real-world robotic tasks consistently demonstrate substantial improvements, validating the broad applicability of MOSS as a general temporal modeling module. The source code and checkpoints will be publicly available.
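To make the core idea concrete: a first-order STSS tensor measures, for each spatio-temporal position, how similar its feature vector is to features in a local neighborhood of the next frame. The sketch below is a minimal NumPy illustration of that first-order computation only; the function name, the use of cosine similarity, and the fixed one-frame temporal offset are assumptions for illustration, not the paper's MOSS module.

```python
import numpy as np

def space_time_self_similarity(feats, radius=1):
    """First-order space-time self-similarity (illustrative sketch).

    For each position (t, h, w), computes cosine similarity between its
    feature vector and every feature in a (2*radius+1)^2 spatial
    neighborhood of frame t+1.

    feats: array of shape (T, H, W, C)
    returns: array of shape (T-1, H, W, 2*radius+1, 2*radius+1)
    """
    T, H, W, C = feats.shape
    k = 2 * radius + 1
    # L2-normalize features so dot products become cosine similarities.
    f = feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + 1e-8)
    # Zero-pad the "next frame" spatially so border neighborhoods are defined.
    nxt = np.pad(f[1:], ((0, 0), (radius, radius), (radius, radius), (0, 0)))
    sim = np.empty((T - 1, H, W, k, k), dtype=f.dtype)
    for du in range(k):
        for dv in range(k):
            # Neighborhood offset (du - radius, dv - radius) in frame t+1.
            shifted = nxt[:, du:du + H, dv:dv + W, :]
            sim[..., du, dv] = np.einsum('thwc,thwc->thw', f[:-1], shifted)
    return sim

# Toy example: 4 frames of an 8x8 feature grid with 16 channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8, 16))
S = space_time_self_similarity(feats, radius=1)
print(S.shape)  # (3, 8, 8, 3, 3)
```

Higher-order variants, per the abstract, would build self-similarity of self-similarity features; MOSS learns and fuses several such orders, which this sketch does not attempt.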
