High-Speed Vision Improves Zero-Shot Semantic Understanding of Human Actions
TLDR
High-speed vision significantly improves zero-shot semantic understanding of rapid human actions by enhancing temporal resolution for fine-grained motions.
Key contributions
- Investigates how temporal resolution affects zero-shot understanding of rapid human actions.
- Proposes a training-free pipeline combining VLM for semantics and LLM for action comparison.
- Demonstrates higher frame rates (e.g., 120 Hz) significantly improve zero-shot semantic separability.
- Shows high-speed video provides more stable semantic representations for fast actions.
Why it matters
This paper highlights the critical role of high temporal resolution in zero-shot action recognition, especially for rapid and subtle human movements. It offers a practical, training-free approach to semantic understanding that avoids extensive labeled data, and its findings suggest that high-speed perception can markedly enhance AI's ability to interpret complex human actions.
Original Abstract
Understanding human actions from visual observations is essential for human-robot interaction, particularly when semantic interpretation of unfamiliar or hard-to-annotate actions is required. In scenarios such as rapid and less common activities, collecting sufficient labeled data for supervised learning is challenging, making zero-shot approaches a practical alternative for semantic understanding without task-specific training. While recent advances in large-scale pretrained models enable such zero-shot reasoning, the impact of temporal resolution, especially for rapid and fine-grained motions, remains underexplored. In this study, we investigate how temporal resolution affects zero-shot semantic understanding of high-speed human actions. Using kendo as a representative case of rapid and subtle motion patterns, we propose a training-free pipeline that combines a pre-trained video-language model for semantic representation with large language model-based reasoning for pairwise action comparison. Through controlled experiments across multiple frame rates (120 Hz, 60 Hz, and 30 Hz), we show that higher temporal resolution significantly improves semantic separability in zero-shot settings. We further analyze the role of tracking-based human joint information under both full and partial observation scenarios. Quantitative evaluation using a nearest-class prototype strategy demonstrates that high-speed video provides more stable and interpretable semantic representations for fast actions. These findings highlight the importance of temporal resolution in training-free action recognition and suggest that high-speed perception can enhance semantic understanding capabilities.
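The abstract's "nearest-class prototype strategy" can be sketched in a few lines: average the embeddings of each class to form a prototype, then assign a query embedding to the class whose prototype is most cosine-similar. This is a minimal illustration only; the paper's actual embeddings come from a pre-trained video-language model, and the function names and toy vectors below are placeholders, not the authors' implementation.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale rows to unit length so dot products become cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def class_prototypes(embeddings: np.ndarray, labels: list[str]):
    """Build one prototype per class as the mean of that class's embeddings."""
    classes = sorted(set(labels))
    labels_arr = np.array(labels)
    protos = np.stack(
        [embeddings[labels_arr == c].mean(axis=0) for c in classes]
    )
    return classes, l2_normalize(protos)

def nearest_prototype(query: np.ndarray, classes: list[str],
                      protos: np.ndarray) -> str:
    """Return the class whose prototype has the highest cosine similarity."""
    sims = l2_normalize(query[None, :]) @ protos.T
    return classes[int(np.argmax(sims))]

# Toy 2-D "embeddings" for two hypothetical action classes.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lbl = ["strike", "strike", "parry", "parry"]
classes, protos = class_prototypes(emb, lbl)
print(nearest_prototype(np.array([0.95, 0.05]), classes, protos))  # strike
```

Under this strategy, "more stable semantic representations" at higher frame rates would show up as query embeddings landing closer to their own class prototype than to any other.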