Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
TLDR
Proximal Policy Optimization (PPO) is a simpler, more efficient family of policy gradient methods that achieves better sample complexity and stronger performance across a range of reinforcement learning tasks.
Key contributions
- Proposes a novel clipped surrogate objective that enables multiple epochs of minibatch updates on each batch of sampled data (see the sketch after this list).
- Retains some of the benefits of Trust Region Policy Optimization (TRPO) while being much simpler to implement and more broadly applicable.
- Outperforms other online policy gradient methods on benchmark tasks including simulated robotic locomotion and Atari game playing.
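The clipped surrogate objective is the heart of the method. Below is a minimal NumPy sketch, not the authors' reference implementation, of L^CLIP(theta) = E_t[min(r_t(theta) * A_t, clip(r_t(theta), 1 - eps, 1 + eps) * A_t)]; the function name and array-based interface are illustrative, while eps = 0.2 is the clip range the paper reports as a default.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Sketch of the clipped surrogate objective L^CLIP from the paper.

    `ratio` holds the per-timestep probability ratios
    r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), and
    `advantage` holds estimates of A_t. epsilon=0.2 is the paper's
    default clip range; the function name is illustrative.
    """
    ratio = np.asarray(ratio, dtype=np.float64)
    advantage = np.asarray(advantage, dtype=np.float64)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    # The elementwise minimum is a pessimistic (lower) bound on the
    # unclipped objective, removing the incentive to push the ratio
    # far outside [1 - eps, 1 + eps].
    return np.minimum(unclipped, clipped).mean()
```

Maximizing this objective with stochastic gradient ascent keeps the new policy close to the one that collected the data, which is what makes it safe to take several gradient steps on the same batch.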
Why it matters
PPO balances ease of implementation, computational efficiency, and strong empirical results, making advanced policy optimization techniques accessible and practical for a wide range of applications.
Original Abstract
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
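To make the alternation the abstract describes concrete, here is a minimal Python sketch of one outer PPO iteration: collect a batch of experience once, then run several epochs of minibatch gradient ascent on the surrogate over that same batch. The `collect_rollout` and `update_minibatch` callbacks are hypothetical stand-ins for environment interaction and one optimizer step, and the epoch/minibatch settings are illustrative defaults rather than the paper's exact hyperparameters.

```python
import numpy as np

def ppo_iteration(collect_rollout, update_minibatch,
                  num_epochs=10, minibatch_size=64):
    """One outer PPO iteration, per the abstract: sample data through
    environment interaction, then optimize the surrogate objective with
    multiple epochs of minibatch stochastic gradient ascent.

    `collect_rollout` and `update_minibatch` are hypothetical callbacks:
    the first returns a list of transitions, the second performs one
    gradient step on the clipped surrogate for a minibatch.
    """
    batch = collect_rollout()            # interact with the environment
    indices = np.arange(len(batch))
    for _ in range(num_epochs):          # reuse the same batch N times
        np.random.shuffle(indices)       # fresh minibatch split each epoch
        for start in range(0, len(batch), minibatch_size):
            minibatch = [batch[i] for i in indices[start:start + minibatch_size]]
            update_minibatch(minibatch)  # one ascent step on the surrogate
```

In contrast, a standard policy gradient method would perform a single gradient update per batch and discard the data; reusing each batch for several epochs is what gives PPO its empirical sample-complexity advantage.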