ArXiv TLDR

BATON: A Multimodal Benchmark for Bidirectional Automation Transition Observation in Naturalistic Driving

arXiv: 2604.07263

Yuhang Wang, Yiyao Xu, Chaoyun Yang, Lingyao Li, Jingran Sun + 1 more

cs.HC, cs.CV, cs.MM

TLDR

BATON is a new multimodal dataset and benchmark for predicting driver handover and takeover events in naturalistic automated driving.

Key contributions

  • Introduces BATON, a large-scale naturalistic dataset (127 drivers, 136.6 hours) of real-world driving automation (DA) usage.
  • Synchronizes multimodal data: front-view/in-cabin video, CAN bus, radar, and GPS for rich context.
  • Defines three benchmark tasks: driving action understanding, handover prediction, and takeover prediction.
  • Demonstrates that multimodal data (CAN, route context) are crucial for accurate transition prediction; a fusion sketch follows below.
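
To make the fusion setting concrete, here is a minimal sketch of how one synchronized window around a control transition could be represented and classified. The field names, feature dimensions, and the GRU fusion model are illustrative assumptions, not the dataset's actual schema or the paper's baselines.

```python
# Hypothetical sketch of multimodal fusion for transition prediction.
# Field names, dimensions, and the model are assumptions for illustration,
# not the BATON reference implementation.
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass
class TransitionWindow:
    front_video: torch.Tensor    # (T, D_front) per-frame road-scene features
    cabin_video: torch.Tensor    # (T, D_cabin) per-frame driver-state features
    can_signals: torch.Tensor    # (T, D_can) decoded CAN bus channels
    route_context: torch.Tensor  # (T, D_route) GPS-derived route features
    label: int                   # 1 if a handover/takeover occurs in the horizon

def fuse(window: TransitionWindow) -> torch.Tensor:
    # Feature-level fusion: concatenate all modalities along the feature axis.
    return torch.cat(
        [window.front_video, window.cabin_video,
         window.can_signals, window.route_context], dim=-1)

class FusionGRU(nn.Module):
    """Run the fused per-timestep features through a GRU and classify."""
    def __init__(self, d_in: int, d_hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(d_in, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)       # h: (num_layers, B, d_hidden)
        return self.head(h[-1])  # logits over {no transition, transition}
```

Dropping the CAN and route tensors from fuse() reduces this to the video-only setting that the paper reports as insufficient on its own.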

Why it matters

Predicting when drivers hand control to automation and when they take it back is vital for safer, more intuitive automated driving systems. This paper provides a multimodal dataset and benchmark that addresses this gap in existing datasets, and its findings carry direct implications for designing proactive human-machine interfaces that improve user experience and safety.

Original Abstract

Existing driving automation (DA) systems on production vehicles rely on human drivers to decide when to engage DA while requiring them to remain continuously attentive and ready to intervene. This design demands substantial situational judgment and imposes significant cognitive load, leading to steep learning curves, suboptimal user experience, and safety risks from both over-reliance and delayed takeover. Predicting when drivers hand over control to DA and when they take it back is therefore critical for designing proactive, context-aware HMI, yet existing datasets rarely capture the multimodal context, including road scene, driver state, vehicle dynamics, and route environment. To fill this gap, we introduce BATON, a large-scale naturalistic dataset capturing real-world DA usage across 127 drivers, and 136.6 hours of driving. The dataset synchronizes front-view video, in-cabin video, decoded CAN bus signals, radar-based lead-vehicle interaction, and GPS-derived route context, forming a closed-loop multimodal record around each control transition. We define three benchmark tasks: driving action understanding, handover prediction, and takeover prediction, and evaluate baselines spanning sequence models, classical classifiers, and zero-shot VLMs. Results show that visual input alone is insufficient for reliable transition prediction: front-view video captures road context but not driver state, while in-cabin video reflects driver readiness but not the external scene. Incorporating CAN and route-context signals substantially improves performance over video-only settings, indicating strong complementarity across modalities. We further find takeover events develop more gradually and benefit from longer prediction horizons, whereas handover events depend more on immediate contextual cues, revealing an asymmetry with direct implications for HMI design in assisted driving systems.
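
The handover/takeover asymmetry noted at the end of the abstract ultimately comes down to how far ahead of each event positive labels are placed. The sketch below shows one common way to construct horizon-based labels from event timestamps; the sampling rate, horizon values, and function name are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

def label_windows(event_times_s, total_len_s, horizon_s, hz=10):
    """Mark a timestep positive if a transition event occurs within the
    next `horizon_s` seconds. Longer horizons give earlier warnings
    (reported to help for takeovers) at the cost of noisier positives."""
    n = int(total_len_s * hz)
    t = np.arange(n) / hz                       # timestamp of each step
    labels = np.zeros(n, dtype=np.int64)
    for e in event_times_s:
        labels[(t >= e - horizon_s) & (t < e)] = 1
    return labels

# Example: one takeover at t = 42 s in a 60 s clip, labeled with an 8 s horizon.
y = label_windows([42.0], total_len_s=60.0, horizon_s=8.0)
```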
