ArXiv TLDR

Cross-Stage Coherence in Hierarchical Driving VQA: Explicit Baselines and Learned Gated Context Projectors

arXiv: 2604.22560

Gautam Kumar Jain, Carsten Markgraf, Julian Stähler

cs.CV, cs.AI

TLDR

This paper compares explicit prompt-based context passing with implicit learned gated context projectors as two ways to improve cross-stage coherence in hierarchical driving VQA.

Key contributions

  • Evaluates explicit prompt-based context passing, reducing NLI contradiction by up to 42.6% on a 4B VLM without training.
  • Introduces implicit gated context projectors, cutting planning-stage NLI contradiction by 34% and boosting entailment by 50%.
  • Presents a comparative study showing explicit methods provide strong surface consistency, while implicit methods yield significant semantic gains.
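The explicit variant conditions each stage on the model's own earlier answers purely through the prompt, with no training. A minimal illustrative sketch of this idea (the function name and template are assumptions, not the paper's exact conditioning strategies):

```python
def build_stage_prompt(question: str, prior_answers: dict[str, str]) -> str:
    """Prepend answers from earlier stages (e.g. Perception, Prediction)
    to the next stage's question so a frozen VLM conditions on them.
    Illustrative template only, not the paper's exact prompts."""
    context_lines = [f"{stage} answer: {ans}" for stage, ans in prior_answers.items()]
    context = "\n".join(context_lines)
    return f"Context from earlier stages:\n{context}\n\nQuestion: {question}"
```

Because the conditioning lives entirely in the input text, this kind of baseline needs no gradient updates, which is why the paper can report it as a zero-training result.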

Why it matters

Ensuring consistent reasoning across stages is crucial for reliable autonomous driving VQA systems. This paper provides practical, effective methods—both training-free and learned—to improve cross-stage coherence. It establishes strong baselines and highlights paths for future domain adaptation.

Original Abstract

Graph Visual Question Answering (GVQA) for autonomous driving organizes reasoning into ordered stages, namely Perception, Prediction, and Planning, where planning decisions should remain consistent with the model's own perception. We present a comparative study of cross-stage context passing on DriveLM-nuScenes using two complementary mechanisms. The explicit variant evaluates three prompt-based conditioning strategies on a domain-adapted 4B VLM (Mini-InternVL2-4B-DA-DriveLM) without additional training, reducing NLI contradiction by up to 42.6% and establishing a strong zero-training baseline. The implicit variant introduces gated context projectors, which extract a hidden-state vector from one stage and inject a normalized, gated projection into the next stage's input embeddings. These projectors are jointly trained with stage-specific QLoRA adapters on a general-purpose 8B VLM (InternVL3-8B-Instruct) while updating only approximately 0.5% of parameters. The implicit variant achieves a statistically significant 34% reduction in planning-stage NLI contradiction (bootstrap 95% CIs, p < 0.05) and increases cross-stage entailment by 50%, evaluated with a multilingual NLI classifier to account for mixed-language outputs. Planning language quality also improves (CIDEr +30.3%), but lexical overlap and structural consistency degrade due to the absence of driving-domain pretraining. Since the two variants use different base models, we present them as complementary case studies: explicit context passing provides a strong training-free baseline for surface consistency, while implicit gated projection delivers significant planning-stage semantic gains, suggesting domain adaptation as a plausible next ingredient for full-spectrum improvement.
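The abstract describes the implicit variant as extracting a hidden-state vector from one stage and injecting a normalized, gated projection of it into the next stage's input embeddings. A minimal PyTorch sketch of that mechanism, with the class name, layer choices, and gate parameterization being assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GatedContextProjector(nn.Module):
    """Hypothetical sketch of a gated context projector: a summary
    hidden-state vector from the previous stage is linearly projected,
    normalized, scaled by a learned gate, and added to the next stage's
    input embeddings. Dimensions and layers are illustrative."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.norm = nn.LayerNorm(hidden_dim)
        # Gate starts at zero so training begins close to the unmodified base model.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, next_stage_embeds: torch.Tensor,
                prev_stage_hidden: torch.Tensor) -> torch.Tensor:
        # prev_stage_hidden: (batch, hidden_dim) summary of the earlier stage
        # next_stage_embeds: (batch, seq_len, hidden_dim) input embeddings
        ctx = self.norm(self.proj(prev_stage_hidden))   # normalized projection
        gated = torch.tanh(self.gate) * ctx             # learned scalar gate
        # Broadcast-add the gated context across every token position.
        return next_stage_embeds + gated.unsqueeze(1)
```

Training only such projectors alongside stage-specific QLoRA adapters is consistent with the abstract's claim of updating roughly 0.5% of the model's parameters.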
