Deep reinforcement learning from human preferences
Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei
TLDR
This paper demonstrates that deep reinforcement learning agents can be trained effectively using human preferences as feedback instead of an explicit reward function, enabling agents to learn complex tasks with minimal human input.
Key contributions
- Introduces a method for training RL agents from human preferences between pairs of trajectory segments rather than a predefined reward function (see the sketch after this list).
- Successfully applies this approach to complex tasks, including Atari games and simulated robot locomotion, with human feedback on less than 1% of the agent's interactions with the environment.
- Shows that complex novel behaviors can be learned with about an hour of human time, greatly reducing the cost of human oversight.
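To make the first contribution concrete, the sketch below shows how a reward model can be fit to pairwise segment comparisons under the Bradley-Terry-style logistic model the paper describes: the probability that one segment is preferred is a softmax over the two segments' summed predicted rewards. This is a minimal illustration, not the authors' code; the architecture and all names (`RewardModel`, `preference_loss`) are assumptions.

```python
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Per-timestep reward estimate r_hat(o, a).
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(r_hat: RewardModel, seg1, seg2, label: float) -> torch.Tensor:
    """Cross-entropy loss on P[seg1 preferred over seg2].

    seg1, seg2: (obs, act) tensor pairs of shape (T, obs_dim), (T, act_dim).
    label: 1.0 if the human preferred seg1, 0.0 for seg2, 0.5 for a tie.
    """
    # P[seg1 > seg2] = softmax over the summed predicted rewards of each segment.
    sums = torch.stack([r_hat(*seg1).sum(), r_hat(*seg2).sum()])
    log_p = torch.log_softmax(sums, dim=0)
    # Soft labels let "can't tell" judgments count as half a vote each way.
    return -(label * log_p[0] + (1.0 - label) * log_p[1])
```

Minimizing this loss over the accumulated comparisons yields a reward estimate that a standard RL algorithm can then optimize against.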
Why it matters
This work matters because it addresses a key challenge in reinforcement learning: specifying complex goals in real-world environments. By eliciting human preferences instead of requiring an explicit reward function, it enables training sophisticated agents with a small, practical amount of human input, broadening the applicability of RL to tasks where reward design is difficult or infeasible.
Original Abstract
For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
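The abstract's claim that tasks are solved "without access to the reward function" amounts to substituting the learned model for the environment's reward during training. Below is a hedged sketch of that substitution, assuming a Gymnasium-style continuous-control environment (float-vector actions) and the `RewardModel` sketched above; the wrapper pattern and names are illustrative, not from the paper.

```python
import gymnasium as gym
import numpy as np
import torch


class LearnedRewardWrapper(gym.Wrapper):
    """Feed the agent the reward model's estimate instead of the true reward."""

    def __init__(self, env: gym.Env, reward_model):
        super().__init__(env)
        self.reward_model = reward_model

    def step(self, action):
        # The environment's true reward is discarded; the policy only
        # ever sees the learned estimate r_hat(obs, action).
        obs, _, terminated, truncated, info = self.env.step(action)
        with torch.no_grad():
            r_hat = self.reward_model(
                torch.as_tensor(obs, dtype=torch.float32),
                torch.as_tensor(np.asarray(action), dtype=torch.float32),
            ).item()
        return obs, r_hat, terminated, truncated, info
```

Any off-the-shelf RL algorithm can then be run on the wrapped environment while preference queries and reward-model updates proceed asynchronously, which is how the paper keeps human feedback under one percent of the agent's interactions.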