ArXiv TLDR

Distorted or Fabricated? A Survey on Hallucination in Video LLMs

2604.12944

Yiyang Huang, Yitian Zhang, Yizhou Wang, Mingyuan Zhang, Liang Shi + 2 more

cs.CV · cs.AI

TLDR

This survey categorizes and analyzes hallucinations in Video LLMs, detailing their types, causes, evaluation, and mitigation strategies.

Key contributions

  • Presents a systematic taxonomy of Vid-LLM hallucinations with two core types, dynamic distortion and content fabrication, each comprising two subtypes.
  • Reviews current evaluation benchmarks, metrics, and mitigation strategies for hallucinations (a toy scoring sketch follows this list).
  • Identifies root causes as limited temporal representation and insufficient visual grounding.
  • Suggests future directions such as motion-aware visual encoders and counterfactual learning techniques.
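
To make the evaluation point concrete, here is a minimal sketch of how a binary-probing hallucination benchmark could score a Vid-LLM: each probe pairs a video with a yes/no question, and affirming an event that never occurs in the video counts as a hallucination. This is an illustrative setup, not the survey's own protocol; `ProbeItem`, `answer_fn`, and `hallucination_rate` are hypothetical names.

```python
# Illustrative only: scoring a Vid-LLM on a binary-probing hallucination
# benchmark. This is an assumed setup, not the survey's specific method.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ProbeItem:
    video_id: str
    question: str   # e.g. "Does the person open the door?"
    label: bool     # True if the queried event actually occurs in the video

def hallucination_rate(items: Iterable[ProbeItem],
                       answer_fn: Callable[[str, str], bool]) -> float:
    """Fraction of negative probes (event absent) that the model affirms.

    answer_fn(video_id, question) -> bool is any wrapper around a Vid-LLM
    that maps its free-form reply to a yes/no decision.
    """
    negatives = [it for it in items if not it.label]
    if not negatives:
        return 0.0
    false_yes = sum(answer_fn(it.video_id, it.question) for it in negatives)
    return false_yes / len(negatives)
```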

Why it matters

This survey provides a systematic understanding of hallucinations in Vid-LLMs, which is essential for developing robust and reliable video-language systems. It consolidates scattered research and offers a clear roadmap for future work in the field.

Original Abstract

Despite significant progress in video-language modeling, hallucinations remain a persistent challenge in Video Large Language Models (Vid-LLMs), referring to outputs that appear plausible yet contradict the content of the input video. This survey presents a comprehensive analysis of hallucinations in Vid-LLMs and introduces a systematic taxonomy that categorizes them into two core types: dynamic distortion and content fabrication, each comprising two subtypes with representative cases. Building on this taxonomy, we review recent advances in the evaluation and mitigation of hallucinations, covering key benchmarks, metrics, and intervention strategies. We further analyze the root causes of dynamic distortion and content fabrication, which often result from limited capacity for temporal representation and insufficient visual grounding. These insights inform several promising directions for future work, including the development of motion-aware visual encoders and the integration of counterfactual learning techniques. This survey consolidates scattered progress to foster a systematic understanding of hallucinations in Vid-LLMs, laying the groundwork for building robust and reliable video-language systems. An up-to-date curated list of related works is maintained at https://github.com/hukcc/Awesome-Video-Hallucination .
