ArXiv TLDR

An End-to-End Decision-Aware Multi-Scale Attention-Based Model for Explainable Autonomous Driving

arXiv:2605.00291

Maryam Sadat Hosseini Azad, Shahriar Baradaran Shokouhi, Amir Abbas Hamidi Imani, Shahin Atakishiyev, Randy Goebel

cs.CV, cs.RO

TLDR

This paper introduces a decision-aware, multi-scale attention-based model for explainable autonomous driving that produces a case-specific explanation alongside each driving decision.
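
The summary does not spell out how the decision-aware wiring works; the sketch below shows one plausible reading in PyTorch, where the predicted driving decisions are fed back into the explanation (reasoning) head so each explanation is conditioned on its decision. The module name, layer sizes, and label counts are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DecisionAwareReasoner(nn.Module):
    """Illustrative sketch: an explanation head conditioned on predicted decisions.

    Assumed shapes (not from the paper):
      visual_feat: (B, feat_dim) pooled multi-scale image features
      n_actions:   number of driving decisions (e.g. 4 in BDD-OIA)
      n_expl:      number of explanation labels (e.g. 21 in BDD-OIA)
    """
    def __init__(self, feat_dim=512, n_actions=4, n_expl=21):
        super().__init__()
        self.action_head = nn.Linear(feat_dim, n_actions)
        # The reasoning head sees both the visual features and the decision
        # probabilities, so explanations can be specific to the chosen action.
        self.expl_head = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 256),
            nn.ReLU(),
            nn.Linear(256, n_expl),
        )

    def forward(self, visual_feat):
        action_logits = self.action_head(visual_feat)
        fused = torch.cat([visual_feat, torch.sigmoid(action_logits)], dim=-1)
        expl_logits = self.expl_head(fused)
        return action_logits, expl_logits

# Toy usage: a batch of 2 pooled feature vectors
model = DecisionAwareReasoner()
actions, explanations = model(torch.randn(2, 512))
print(actions.shape, explanations.shape)  # torch.Size([2, 4]) torch.Size([2, 21])
```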

Key contributions

  • Proposes a multi-scale attention-based model for explainable AI (XAI) in autonomous driving.
  • Integrates driving decisions into the reasoning component to produce case-specific explanations.
  • Introduces a novel "Joint F1 score" metric for more reliable XAI performance evaluation (see the sketch after this list).
  • Demonstrates superior performance and generalization on the BDD-OIA and nu-AR datasets.
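
The paper's exact definition of the Joint F1 score is not reproduced in this summary; the sketch below shows one plausible reading, a micro-averaged F1 computed over the concatenated action and explanation labels, so a model only scores well when both parts of a prediction hold up together. All names, shapes, and the toy data are assumptions for illustration.

```python
import numpy as np

def micro_f1(y_true, y_pred, eps=1e-8):
    """Micro-averaged F1 over a binary multi-label matrix of shape (samples, labels)."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)

def joint_f1(act_true, act_pred, expl_true, expl_pred):
    """Hypothetical 'Joint F1': F1 over action and explanation labels scored together."""
    joint_true = np.concatenate([act_true, expl_true], axis=1)
    joint_pred = np.concatenate([act_pred, expl_pred], axis=1)
    return micro_f1(joint_true, joint_pred)

# Toy example: 3 samples, 4 action labels, 21 explanation labels (BDD-OIA-like sizes)
rng = np.random.default_rng(0)
act_t, act_p = rng.integers(0, 2, (3, 4)), rng.integers(0, 2, (3, 4))
exp_t, exp_p = rng.integers(0, 2, (3, 21)), rng.integers(0, 2, (3, 21))
print(micro_f1(act_t, act_p), joint_f1(act_t, act_p, exp_t, exp_p))
```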

Why it matters

Autonomous driving systems critically need explainability for reliability and trust. This model offers a novel, decision-aware approach to XAI, addressing limitations of prior methods. Its new evaluation metric and strong performance on multiple datasets advance the field.

Original Abstract

The application of computer vision is gradually increasing across various domains, and these applications employ deep learning models with a black-box nature. Without the ability to explain the behavior of neural networks, especially their decision-making processes, it is not possible to recognize their efficiency, predict system failures, or effectively implement them in real-world applications. Due to the inevitable use of deep learning in fully automated driving systems, many methods have been proposed to explain their behavior; however, they suffer from flawed reasoning and unreliable metrics, which have prevented a comprehensive understanding of complex models in autonomous vehicles and hindered the development of truly reliable systems. In this study, we propose a multi-scale attention-based model in which driving decisions are fed into the reasoning component to provide case-specific explanations for each decision simultaneously. For quantitative evaluation of our model's performance, we employ the F1-score metric and also propose a new metric, the Joint F1 score, to demonstrate the accurate and reliable performance of the model in terms of Explainable Artificial Intelligence (XAI). In addition to the BDD-OIA dataset, the nu-AR dataset is utilized to further validate the generalization capability and robustness of the proposed network. The results demonstrate the superiority of our reasoning network over classic and state-of-the-art models.
