ArXiv TLDR

Beyond Negative Rollouts: Positive-Only Policy Optimization with Implicit Negative Gradients

arXiv:2605.06650

Mingwei Xu, Hao Fang

cs.CL

TLDR

POPO is a novel RLVR framework for LLMs that learns exclusively from positive rollouts, achieving strong performance by implicitly deriving negative gradients.

Key contributions

  • Introduces Positive-Only Policy Optimization (POPO) for RLVR, learning solely from positive rollouts.
  • Utilizes bounded importance sampling and rollout redistribution to derive implicit negative gradients.
  • Stabilizes training with a siamese policy network and a bounded similarity penalty term.
  • Achieves comparable or superior performance to GRPO on LLM mathematical benchmarks.
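The digest does not specify the exact objective, so the following is a minimal, hypothetical sketch of what the second bullet above (bounded importance sampling plus rollout redistribution over positives only) could look like in practice. The function name, the clip_range parameter, and the normalization step are illustrative assumptions, not the paper's implementation.

```python
import torch

def popo_positive_only_loss(logp_new: torch.Tensor,
                            logp_old: torch.Tensor,
                            clip_range: float = 0.2) -> torch.Tensor:
    """Illustrative positive-only surrogate loss (assumed form, not the paper's).

    logp_new, logp_old: sequence-level log-probabilities of the *positive*
    rollouts under the current policy and the sampling (old) policy.
    """
    # Bounded importance sampling: clamp the ratio so a few positive
    # rollouts cannot dominate the update.
    ratio = torch.exp(logp_new - logp_old)
    bounded = torch.clamp(ratio, 1.0 - clip_range, 1.0 + clip_range)

    # Rollout redistribution (one plausible reading): renormalize the bounded
    # weights over the positive set so they form a distribution.
    weights = (bounded / bounded.sum()).detach()

    # Reinforce positives only. Because the model's output distribution is
    # normalized, raising the probability of positive rollouts implicitly
    # lowers the probability of unsampled (negative) continuations -- the
    # "implicit negative gradient" the contributions refer to.
    return -(weights * logp_new).sum()
```

In this reading, no negative rollouts ever enter the loss; any gradient pressure on failures arises purely from probability normalization.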

Why it matters

Traditional RLVR methods rely on penalizing negative rollouts, but the failure space is combinatorially vast and sparse binary rewards give no gradation of failure severity, so a handful of sampled negatives carries little signal. By learning only from positive examples, POPO aims to make RLVR training for LLM reasoning more scalable and robust.

Original Abstract

Reinforcement learning with verifiable rewards (RLVR), owing to its deterministic verification, has become a dominant paradigm for enhancing the reasoning ability of large language models (LLMs). The community has witnessed a rapid shift from Proximal Policy Optimization (PPO) to Group Relative Policy Optimization (GRPO), in which GRPO replaces complicated advantage estimation with a simple estimate over grouped positive and negative rollouts. However, we note that negative rollouts may admit no gradation of failure severity, and their combinatorial vastness makes penalizing a few sampled negatives unlikely to yield a meaningful reward signal under sparse binary rewards. In this work, we propose Positive-Only Policy Optimization (POPO), a novel RLVR framework in which learning occurs exclusively via online positive rollouts. Specifically, POPO applies bounded importance sampling over the positive rollout set, so no disjoint negative rollouts are used for gradient guidance. We show that implicit negative gradients can emerge naturally by reinforcing positive probabilities via rollout redistribution. Next, POPO stabilizes policy optimization through two mechanisms. First, it applies a siamese policy network with a momentum-based adaptation law for stabilized policy evolution. Second, we replace the KL divergence with a bounded similarity penalty term in the siamese representation space. We conduct extensive experiments on publicly available, well-established text LLMs, e.g., the Qwen family, across mathematical benchmarks of all difficulty levels. Our experiments demonstrate that POPO achieves performance comparable to, or even superior to, GRPO. Notably, POPO reaches 36.67% on AIME 2025 with Qwen-Math-7B, outperforming GRPO's 30.00%. Our ablation and sweep studies further illustrate the necessity and robustness of POPO's components.
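For the two stabilization mechanisms named in the abstract (a siamese policy with a momentum-based adaptation law, and a bounded similarity penalty replacing KL divergence), here is a hedged sketch under common conventions. The momentum coefficient, the cosine-distance penalty, and all names are assumptions for illustration, not details taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def make_siamese(policy: torch.nn.Module) -> torch.nn.Module:
    """Siamese (momentum) copy of the policy; it is never updated by gradients."""
    siamese = copy.deepcopy(policy)
    for p in siamese.parameters():
        p.requires_grad_(False)
    return siamese

@torch.no_grad()
def momentum_update(policy: torch.nn.Module,
                    siamese: torch.nn.Module,
                    m: float = 0.99) -> None:
    """Momentum-based adaptation: the siamese weights track an exponential
    moving average of the online policy, smoothing policy evolution."""
    for p, q in zip(policy.parameters(), siamese.parameters()):
        q.mul_(m).add_(p, alpha=1.0 - m)

def bounded_similarity_penalty(h_policy: torch.Tensor,
                               h_siamese: torch.Tensor) -> torch.Tensor:
    """Bounded penalty in the siamese representation space: cosine distance
    lies in [0, 2], unlike KL divergence, which is unbounded."""
    return 1.0 - F.cosine_similarity(h_policy, h_siamese.detach(), dim=-1).mean()
```

A bounded penalty of this kind caps how strongly any single batch can pull the policy back toward its momentum copy, which is one way such a term could stabilize training where an unbounded KL term can spike.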
