Learning Control Policies to Provably Satisfy Hard Affine Constraints for Black-Box Hybrid Dynamical Systems
Aayushi Shrivastava, Kartik Nagpal, Sairam Jinkala, Jean-Baptiste Bouvier, Negar Mehr
TLDR
This paper introduces an RL method whose learned policies provably satisfy hard affine safety constraints for black-box hybrid dynamical systems, even in the presence of instantaneous state jumps.
Key contributions
- Develops an RL method for black-box hybrid systems to provably satisfy hard affine state constraints.
- Forces RL policies to be affine and repulsive near the constraint boundaries, so closed-loop trajectories cannot cross them.
- Introduces a second repulsive affine region before each reset to prevent post-jump constraint violations (see the sketch after this list).
- Outperforms state-of-the-art safe RL and CBF methods in maintaining safety and policy quality.
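
To make the repulsion and reset arguments concrete, here is a minimal sketch of the reasoning for a single affine constraint; the notation (constraint vector a, bound b, reset matrix R, offset r, margin ε, policy π) is illustrative and not taken from the paper.

```latex
% Sketch for one affine constraint a^T x <= b, dynamics \dot{x} = f(x, u),
% and policy \pi. All notation is assumed for illustration.
\begin{align*}
  &\text{Safety:} && a^\top x(t) \le b \quad \forall t \ge 0,\\
  &\text{Repulsion near the boundary:} && b - \varepsilon \le a^\top x \le b
      \;\Rightarrow\; a^\top f\big(x, \pi(x)\big) \le 0,\\
  &\text{so a trajectory starting with } a^\top x(0) \le b \text{ cannot cross the boundary.}\\
  &\text{Affine reset map:} && x^+ = R\,x^- + r,\\
  &\text{Post-jump safety:} && a^\top x^+ \le b
      \;\Longleftrightarrow\; (R^\top a)^\top x^- \le b - a^\top r .
\end{align*}
```

The last line shows that post-reset safety is itself an affine constraint on the pre-reset state, which is why a second repulsive region placed before the reset suffices to prevent post-jump violations.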
Why it matters
Ensuring safety in complex hybrid systems with unknown dynamics is crucial for real-world applications. This work offers a provably safe RL framework that handles both continuous dynamics and instantaneous state jumps, advancing safe learning for black-box systems. Unlike methods that merely discourage violations through reward shaping, it guarantees hard constraint satisfaction in closed loop.
Original Abstract
Ensuring safety for black-box hybrid dynamical systems presents significant challenges due to their instantaneous state jumps and unknown explicit nonlinear dynamics. Existing solutions for strict safety constraint satisfaction, like control barrier functions (CBFs) and reachability analysis, rely on direct knowledge of the dynamics. Similarly, safe reinforcement learning (RL) approaches often rely on known system dynamics or merely discourage safety violations through reward shaping. In this work, we want to learn RL policies which provably satisfy affine state constraints in closed loop for black-box hybrid dynamical systems with affine reset maps. Our key insight is forcing the RL policy to be affine and repulsive near the constraint boundaries for the unknown nonlinear dynamics of the system, providing guarantees that the trajectories will not violate the constraint. We further account for constraint violation due to instantaneous state jumps that occur due to impacts or reset maps in the hybrid system by introducing a second repulsive affine region before the reset that prevents post-reset states from violating the constraint. We derive sufficient conditions under which these policies satisfy safety constraints in closed loop. We also compare our approach with state-of-the-art reward shaping and learned-CBF methods on hybrid dynamical systems like the constrained pendulum and paddle juggler environments. In both scenarios, we show that our methodology learns higher quality policies while always satisfying the safety constraints.
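
As a rough illustration of how such a policy could be structured, the sketch below wraps a learned RL policy with an affine, constraint-repulsive controller that takes over inside a band near the boundary. This is a simplified reading of the abstract, not the authors' implementation; the class name, the gains K and k, and the margin are all assumptions.

```python
import numpy as np

# Illustrative sketch only: a policy that switches to an affine,
# boundary-repulsive control inside a band near the constraint a^T x <= b.
# Names, gains, and thresholds are assumptions, not the paper's code.
class RepulsiveAffinePolicy:
    def __init__(self, rl_policy, a, b, K, k, margin=0.1):
        self.rl_policy = rl_policy  # learned policy used away from the boundary
        self.a, self.b = a, b       # affine constraint: a^T x <= b
        self.K, self.k = K, k       # affine control u = K x + k, assumed chosen so
                                    # that a^T f(x, K x + k) <= 0 inside the band
        self.margin = margin        # width of the repulsive band near the boundary

    def __call__(self, x):
        slack = self.b - self.a @ x  # distance to the constraint boundary
        if slack <= self.margin:
            # inside the repulsive band: apply the affine, repulsive control
            return self.K @ x + self.k
        # interior of the safe set: defer to the learned RL policy
        return self.rl_policy(x)
```

For the hybrid case, an analogous band would be placed before the reset surface using the tightened constraint (Rᵀa)ᵀx ≤ b − aᵀr from the sketch above, so that states entering the reset cannot produce an unsafe post-jump state.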