ArXiv TLDR

Behavior-Constrained Reinforcement Learning with Receding-Horizon Credit Assignment for High-Performance Control

arXiv:2604.03023

Siwei Ju, Jan Tauberschmidt, Oleg Arenz, Peter van Vliet, Jan Peters

cs.RO

TLDR

This paper proposes a behavior-constrained RL framework with receding-horizon credit assignment to learn high-performance, expert-consistent control policies.

Key contributions

  • Introduces a behavior-constrained RL framework that improves beyond demonstrations while explicitly controlling deviation from expert behavior.
  • Uses a receding-horizon predictive mechanism that scores short-term future trajectories with look-ahead rewards, enforcing trajectory-level consistency (sketched after this list).
  • Conditions the policy on reference trajectories so it represents a distribution of expert-consistent behaviors rather than a single deterministic target (see the second sketch below).
  • Achieves competitive lap times in race car simulation, outperforming baselines in both performance and imitation quality.
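
The first two ideas can be illustrated together as a single reward-shaping step. Below is a minimal numpy sketch, assuming a known one-step dynamics model `step_fn`, a scalar task reward `task_reward`, and an expert reference segment `ref_segment`; these names, the Euclidean deviation metric, and the fixed penalty weight are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def lookahead_reward(state, policy, step_fn, task_reward, ref_segment,
                     horizon=10, beh_weight=1.0):
    """Receding-horizon, behavior-constrained reward (illustrative).

    Rolls the current policy forward `horizon` steps through a dynamics
    model, then penalizes the deviation of that short predicted trajectory
    from the expert reference, so the behavior constraint acts at the
    trajectory level instead of on isolated state-action pairs.
    """
    s, r_task, traj = state, 0.0, []
    for _ in range(horizon):
        a = policy(s)                # action from the current policy
        r_task += task_reward(s, a)  # accumulate the performance objective
        s = step_fn(s, a)            # predicted next state
        traj.append(s)
    # trajectory-level deviation from the expert reference over the horizon
    deviation = np.mean([np.linalg.norm(sp - sr)
                         for sp, sr in zip(traj, ref_segment[:horizon])])
    return r_task - beh_weight * deviation

# toy check: 1-D double integrator tracking a zero reference
step_fn = lambda s, a: s + 0.1 * np.array([s[1], a])
policy = lambda s: -s[0] - s[1]
task_reward = lambda s, a: -float(s[0] ** 2)
print(lookahead_reward(np.array([1.0, 0.0]), policy, step_fn,
                       task_reward, ref_segment=np.zeros((10, 2))))
```

Raising `beh_weight` trades lap-time performance for closeness to the expert; at the receding horizon's end the window simply slides forward, so the penalty always covers the near future.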
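The third idea, conditioning on reference trajectories, amounts to giving the policy the next few reference waypoints as an extra input, so one network can represent a family of expert-consistent behaviors and switching references changes the behavior without retraining. A minimal PyTorch sketch follows; the class name, shapes, and layer sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class RefConditionedPolicy(nn.Module):
    """Policy conditioned on a reference-trajectory window (illustrative)."""

    def __init__(self, state_dim, ref_dim, ref_len, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + ref_dim * ref_len, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, ref_window):
        # ref_window: (batch, ref_len, ref_dim); flatten and concatenate
        # with the state so the same network serves any reference
        x = torch.cat([state, ref_window.flatten(start_dim=1)], dim=-1)
        return torch.tanh(self.net(x))  # bounded actions, e.g. steer/throttle

# hypothetical shapes: 12-D vehicle state, 10 upcoming 2-D waypoints
policy = RefConditionedPolicy(state_dim=12, ref_dim=2, ref_len=10,
                              action_dim=2)
action = policy(torch.zeros(1, 12), torch.zeros(1, 10, 2))  # -> (1, 2)
```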

Why it matters

This method addresses a long-standing tension in robotics: RL discovers high-performing strategies but tends to drift from desirable human behavior, while imitation learning is capped by the quality of its demonstrations. By constraining RL with trajectory-level expert consistency, the approach yields policies that are both optimal and behavior-consistent, making them reliable surrogates for human decision-making in complex control systems.

Original Abstract

Learning high-performance control policies that remain consistent with expert behavior is a fundamental challenge in robotics. Reinforcement learning can discover high-performing strategies but often departs from desirable human behavior, whereas imitation learning is limited by demonstration quality and struggles to improve beyond expert data. We propose a behavior-constrained reinforcement learning framework that improves beyond demonstrations while explicitly controlling deviation from expert behavior. Because expert-consistent behavior in dynamic control is inherently trajectory-level, we introduce a receding-horizon predictive mechanism that models short-term future trajectories and provides look-ahead rewards during training. To account for the natural variability of human behavior under disturbances and changing conditions, we further condition the policy on reference trajectories, allowing it to represent a distribution of expert-consistent behaviors rather than a single deterministic target. Empirically, we evaluate the approach in high-fidelity race car simulation using data from professional drivers, a domain characterized by extreme dynamics and narrow performance margins. The learned policies achieve competitive lap times while maintaining close alignment with expert driving behavior, outperforming baseline methods in both performance and imitation quality. Beyond standard benchmarks, we conduct human-grounded evaluation in a driver-in-the-loop simulator and show that the learned policies reproduce setup-dependent driving characteristics consistent with the feedback of top-class professional race drivers. These results demonstrate that our method enables learning high-performance control policies that are both optimal and behavior-consistent, and can serve as reliable surrogates for human decision-making in complex control systems.
