Learning Responsibility-Attributed Adversarial Scenarios for Testing Autonomous Vehicles
Yizhuo Xiao, Haotian Yan, Ying Wang, Zhongpan Zhu, Yuxin Zhang, et al.
TLDR
CARS generates responsibility-attributed adversarial scenarios for autonomous vehicles, distinguishing system failures from unavoidable traffic conflicts for better safety assurance.
Key contributions
- Integrates responsibility attribution directly into adversarial scenario generation for AVs.
- Combines context-aware adversary selection with a generative adversarial policy in closed-loop simulation.
- Discovers physically feasible collision scenarios with high diagnostic attribution rates.
- Enables interpretable, regulation-aligned safety evidence for scalable autonomous driving system validation.
Why it matters
This paper addresses a critical gap in autonomous vehicle safety testing by providing a method to attribute collision responsibility. It moves beyond simple collision detection to generate interpretable, regulation-aligned safety evidence, crucial for building trustworthy autonomous driving systems.
Original Abstract
Establishing trustworthy safety assurance for autonomous driving systems (ADSs) requires evidence that failures arise from avoidable system deficiencies rather than unavoidable traffic conflicts. Current adversarial simulation methods can efficiently expose collisions, but generally lack mechanisms to distinguish these fundamentally different failure modes. Here we present CARS (Context-Aware, Responsibility-attributed Scenario generation), a framework that integrates responsibility attribution directly into adversarial scenario generation. CARS combines context-aware adversary selection with a generative adversarial policy optimized in closed-loop simulation to construct collision scenarios that are both physically feasible and diagnostically attributable. Across benchmark datasets spanning heterogeneous national traffic environments, CARS consistently discovers feasible collision scenarios with high attribution rates under multiple regulation-prescribed careful and competent driver models. By coupling adversarial generation with normative responsibility assessment, CARS moves simulation testing beyond collision discovery toward the construction of interpretable, regulation-aligned safety evidence for scalable ADS validation.
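The abstract describes a closed loop: select an adversary based on scene context, drive it with an adversarial policy, and keep only those collisions a normative driver model attributes to the system under test. The sketch below illustrates that filtering idea in miniature. It is not the paper's implementation; the scenario fields, the distance-based adversary selection, and the time-to-collision threshold standing in for a "careful and competent driver" model are all hypothetical simplifications.

```python
def select_adversary(scenario):
    # Context-aware selection (hypothetical heuristic): pick the agent
    # closest to the ego vehicle's path as the adversarial candidate.
    return min(scenario["agents"], key=lambda a: a["distance_to_ego"])

def attribute_responsibility(collision, reaction_time=0.75):
    # Stand-in for a regulation-prescribed driver model (hypothetical):
    # if the ego had at least `reaction_time` seconds to react, the
    # collision counts as an avoidable system deficiency ("ego" at fault);
    # otherwise it was an unavoidable conflict forced by the adversary.
    return "ego" if collision["time_to_collision"] >= reaction_time else "adversary"

def cars_loop(scenarios):
    """Closed-loop sketch: for each scenario, choose an adversary and keep
    only collisions attributable to the system under test as safety evidence."""
    evidence = []
    for scenario in scenarios:
        adversary = select_adversary(scenario)
        # A real framework would optimize a generative adversarial policy in
        # simulation here; this sketch reads off a precomputed outcome instead.
        collision = scenario.get("collision")
        if collision and attribute_responsibility(collision) == "ego":
            evidence.append({"adversary": adversary["id"], **collision})
    return evidence
```

The key point the sketch captures is that collision discovery and responsibility attribution are separate gates: a scenario only becomes safety evidence if it both produces a collision and passes the normative attribution check.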