Accelerating RL Post-Training Rollouts via System-Integrated Speculative Decoding
Hayate Iso, Tiyasa Mitra, Sudipta Mondal, Rasoul Shafipour, Venmugil Elango + 13 more
TLDR
This paper accelerates RL post-training rollouts for large language models by integrating speculative decoding into the training pipeline, measuring a 1.8x rollout throughput gain at 8B scale and projecting up to a 2.5x end-to-end training speedup at 235B scale, while preserving the target model's output distribution.
Key contributions
- Introduces system-integrated speculative decoding for lossless RL rollout acceleration.
- Implements speculative decoding within NeMo-RL using a vLLM backend.
- Achieves 1.8x rollout throughput improvement for synchronous RL at 8B scale.
- Projects up to 2.5x end-to-end training speedup for asynchronous RL at 235B scale.
Why it matters
Autoregressive rollout generation is a major bottleneck in RL post-training of large language models. Because speculative decoding preserves the target model's output distribution, this acceleration is lossless and can be integrated directly into existing RL training pipelines, promising substantial speedups for developing frontier models.
Original Abstract
RL post-training of frontier language models is increasingly bottlenecked by autoregressive rollout generation, making rollout acceleration a central systems challenge. Many existing efficiency methods improve throughput by changing the rollout or optimization regime, for example, through off-policy execution, replay, or lower-precision generation. We study speculative decoding as a lossless acceleration primitive for RL rollouts that preserves the target model's output distribution. We implement speculative decoding in NeMo-RL with a vLLM backend, supporting both synchronous and asynchronous pipelines and enabling speculation during RL rollouts. This benefit is realizable across speculation mechanisms, such as pretrained MTP heads, small external draft models, or even techniques such as Eagle3, which are traditionally applied after the RL phase. This yields a deployment path for state-of-the-art speculative decoding inside RL training. In a reasoning post-training workload at 8B scale under synchronous RL, speculative decoding improves rollout throughput by 1.8x. Using a high-fidelity performance simulator, we project that combining speculative decoding with asynchronous RL yields up to 2.5x end-to-end training speedup at 235B scale.
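The abstract's "lossless" claim rests on the standard speculative-decoding accept/reject rule: a draft token x proposed with draft probability q(x) is accepted with probability min(1, p(x)/q(x)) under the target distribution p; on rejection, a replacement is sampled from the normalized residual max(0, p - q), so the returned token is marginally distributed exactly as p. A minimal sketch of that rule (not the NeMo-RL/vLLM implementation; function and variable names are illustrative):

```python
import numpy as np

def speculative_accept(p, q, draft_token, rng):
    """One speculative-decoding accept/reject step.

    p, q: target and draft next-token distributions (1-D arrays summing to 1).
    Accept the draft token with probability min(1, p/q); otherwise resample
    from the renormalized residual max(0, p - q). The marginal distribution
    of the returned token is exactly p, which makes the method lossless.
    """
    accept_prob = min(1.0, p[draft_token] / q[draft_token])
    if rng.random() < accept_prob:
        return draft_token, True          # draft token accepted as-is
    residual = np.maximum(p - q, 0.0)     # mass where the target exceeds the draft
    residual /= residual.sum()
    return int(rng.choice(len(p), p=residual)), False
```

In a full rollout loop, the draft model proposes several tokens per target forward pass; each proposal goes through this step, and the first rejection truncates the speculated block, which is where the throughput gain comes from.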