Connected Dependability Cage: Run-Time Function and Anomaly Monitoring for the Development and Operation of Safe Automated Vehicles
Iqra Aslam, Nour Habib, Abhishek Buragohain, Meng Zhang, Andreas Rausch, et al.
TLDR
The Connected Dependability Cage enhances automated vehicle safety with fail-operational AI perception, using function and anomaly monitors for robust operation.
Key contributions
- Introduces Connected Dependability Cage for fail-operational AI perception in automated vehicles.
- Features a Function Monitor to detect inconsistencies across diverse AI perception pipelines.
- Includes an Anomaly Monitor to identify unknown objects and novel scenes for AI reliability.
- Supports graceful degradation to minimal-risk maneuvers and automated data recording for continuous improvement.
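The paper does not publish the monitor's implementation, but the idea of a Function Monitor that cross-checks heterogeneous perception pipelines via voting can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `function_monitor`, the `quorum` parameter, and the class-label inputs are all assumptions.

```python
from collections import Counter

def function_monitor(pipeline_outputs, quorum=2):
    """Majority vote over class labels reported by redundant, heterogeneous
    perception pipelines for the same tracked object. Returns (label, safe),
    where safe is False when no quorum of pipelines agrees -- the cue to
    degrade gracefully and start data recording."""
    votes = Counter(pipeline_outputs)
    label, count = votes.most_common(1)[0]
    return label, count >= quorum

# Two of three pipelines agree -> quorum holds, output is trusted
print(function_monitor(["pedestrian", "pedestrian", "cyclist"]))

# All three disagree -> safety flag, trigger minimal-risk maneuver logic
print(function_monitor(["pedestrian", "cyclist", "car"]))
```

In a real system the vote would run per object track and over richer outputs (position, extent, class), but the quorum check captures the core inconsistency-detection idea.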
Why it matters
This paper addresses critical safety challenges in automated vehicles, particularly for AI perception in unpredictable environments. It proposes a novel framework for fail-operational behavior, enhancing reliability beyond traditional functional safety. Validated through vehicle testing, the system supports both continuous development and safe operation.
Original Abstract
The advancement of automated vehicles introduces complex safety challenges, particularly in dynamic and unpredictable environments where AI-enabled perception systems must operate reliably. Ensuring compliance with safety standards such as ISO 26262 and ISO/PAS 21448 (SOTIF) is essential for addressing system malfunctions and mitigating unsafe behavior in unknown scenarios. However, as automation levels increase, vehicles must go beyond conventional functional safety by incorporating fail-operational capabilities that enable continued safe operation during system or component failures and the handling of unfamiliar or degraded operational conditions. To address these safety concerns, we propose the Connected Dependability Cage, an architectural framework designed to enable hierarchical fail-operational behavior in AI-enabled perception systems. This framework integrates two complementary monitoring mechanisms: a Function Monitor that oversees multiple heterogeneous AI-based perception pipelines and detects inconsistencies through a voting mechanism, and an Anomaly Monitor that evaluates the reliability of AI perception by detecting unknown or novel objects in scenes that may be excluded from the training dataset. In the presence of critical discrepancies, the system supports graceful degradation, ultimately enabling a transition to a minimal-risk maneuver strategy. Furthermore, whenever either monitor raises a safety flag, an automated data recording process is initiated to facilitate iterative system development and continuous improvement. Both monitors have been implemented and validated through extensive vehicle testing, demonstrating their practical effectiveness in real-world applications.
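To make the Anomaly Monitor concrete: one common way to flag "unknown or novel" inputs is to measure how far a scene's feature vector lies from everything seen during training. The sketch below uses a nearest-neighbor distance threshold in feature space; this is an assumed, simplified stand-in for whatever detector the authors use, and the names `anomaly_monitor` and `threshold` are hypothetical.

```python
import numpy as np

def anomaly_monitor(scene_feature, training_features, threshold):
    """Flag a scene as out-of-distribution when its feature vector is
    farther than `threshold` from every feature seen during training
    (nearest-neighbor distance in feature space)."""
    dists = np.linalg.norm(training_features - scene_feature, axis=1)
    nearest = float(dists.min())
    return nearest > threshold, nearest

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in for training-set features

in_dist = train[0]            # a scene the training data covers
novel = np.array([6.0, 6.0])  # far outside the training cloud

flag_in, _ = anomaly_monitor(in_dist, train, threshold=1.0)
flag_out, _ = anomaly_monitor(novel, train, threshold=1.0)
print(flag_in, flag_out)  # False True
```

When the flag is raised, the architecture described in the abstract would treat the perception output as unreliable, degrade toward a minimal-risk maneuver, and record the scene for iterative retraining.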