ArXiv TLDR

EgoTL: Egocentric Think-Aloud Chains for Long-Horizon Tasks

arXiv: 2604.09535

Lulin Liu, Dayou Li, Yiqing Liang, Sicong Jiang, Hitesh Vijay + 6 more

cs.CV

TLDR

EgoTL introduces a think-aloud data pipeline that improves VLM performance on long-horizon egocentric tasks by providing accurate human chain-of-thought (CoT) and spatial annotations.

Key contributions

  • Introduces EgoTL, a think-aloud capture pipeline for egocentric data built around a "say-before-act" protocol.
  • Captures step-by-step goals, spoken reasoning with word-level timestamps, metric-scale spatial properties, and detailed action tags (a hypothetical record layout is sketched after this list).
  • Benchmarks VLMs and world models on 100+ long-horizon household tasks, revealing current model limitations.
  • Finetuning with EgoTL data significantly improves VLM long-horizon planning, reasoning, and spatial grounding.
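
To make the captured fields concrete, here is a minimal sketch of what a single EgoTL annotation record could look like, assuming a Python dataclass layout. All class and field names (`EgoTLClip`, `TimedWord`, `SpatialProperty`, etc.) are assumptions drawn from the fields named in the abstract, not the paper's released schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TimedWord:
    """A single spoken word with its word-level timestamp."""
    word: str
    start_s: float  # seconds from clip start
    end_s: float

@dataclass
class SpatialProperty:
    """Metric-scale physical attributes of one object in the scene."""
    object_name: str
    extent_m: Tuple[float, float, float]  # width, height, depth in meters
    distance_m: float                     # distance from the camera wearer

@dataclass
class EgoTLClip:
    """One annotated clip: the goal spoken before acting, word-aligned
    reasoning, calibrated spatial labels, and clip-level action tags."""
    task: str                        # e.g. "prepare a cup of tea"
    step_goal: str                   # goal stated under the say-before-act protocol
    reasoning: List[TimedWord]       # spoken chain-of-thought, word-aligned
    spatial: List[SpatialProperty]   # metric-scale physical properties in the scene
    navigation_tag: str              # clip-level navigation instruction
    manipulation_tags: List[str]     # detailed manipulation action labels
```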

Why it matters

Embodied AI models struggle with long-horizon tasks due to noisy data lacking human reasoning and spatial context. EgoTL introduces a high-quality data pipeline to capture human thought processes and metric-scale spatial information. This is crucial for developing robust egocentric AI assistants that can reliably understand and execute complex real-world instructions.

Original Abstract

Large foundation models have made significant advances in embodied intelligence, enabling synthesis and reasoning over egocentric input for household tasks. However, VLM-based auto-labeling is often noisy because the primary data sources lack accurate human action labels, chain-of-thought (CoT), and spatial annotations; these errors are amplified during long-horizon spatial instruction following. These issues stem from insufficient coverage of minute-long, daily household planning tasks and from inaccurate spatial grounding. As a result, VLM reasoning chains and world-model synthesis can hallucinate objects, skip steps, or fail to respect real-world physical attributes. To address these gaps, we introduce EgoTL. EgoTL builds a think-aloud capture pipeline for egocentric data. It uses a say-before-act protocol to record step-by-step goals and spoken reasoning with word-level timestamps, then calibrates physical properties with metric-scale spatial estimators, a memory-bank walkthrough for scene context, and clip-level tags for navigation instructions and detailed manipulation actions. With EgoTL, we are able to benchmark VLMs and World Models on six task dimensions from three layers and long-horizon generation over minute-long sequences across over 100 daily household tasks. We find that foundation models still fall short as egocentric assistants or open-world simulators. Finally, we finetune foundation models with human CoT aligned with metric labels on the training split of EgoTL, which improves long-horizon planning and reasoning, step-wise reasoning, instruction following, and spatial grounding.
