ArXiv TLDR

Lightning OPD: Efficient Post-Training for Large Reasoning Models with Offline On-Policy Distillation

arXiv: 2604.13010

Yecheng Wu, Song Han, Hai Cai

cs.LG, cs.AI

TLDR

Lightning OPD is an efficient offline on-policy distillation method for LLM post-training that enforces teacher consistency, eliminating the need for a live teacher server and speeding up training.

Key contributions

  • Identifies "teacher consistency" as a critical, previously overlooked condition for on-policy distillation.
  • Proposes Lightning OPD, an offline framework that precomputes teacher log-probabilities over SFT rollouts to enforce consistency (see the loss sketch after this list).
  • Eliminates the need for a live teacher inference server, significantly reducing infrastructure overhead.
  • Achieves a 4.0x speedup over standard OPD and state-of-the-art performance on reasoning benchmarks such as AIME 2024.
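
Concretely, "offline" here means the teacher's per-token log-probabilities over the fixed SFT rollouts are gathered once and reused at every training step, so the distillation loss can be computed without any teacher forward pass. Below is a minimal sketch of what such a loss could look like; the function name, tensor shapes, and the REINFORCE-style surrogate are illustrative assumptions, not the paper's exact objective.

```python
import torch

def offline_opd_loss(student_logprobs: torch.Tensor,
                     teacher_logprobs: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    # student_logprobs: [batch, seq] log-probs the student assigns to the
    # rollout tokens (carries gradients). teacher_logprobs: same shape,
    # loaded from the precomputed cache, so no live teacher call is needed.
    # mask: 1.0 on response tokens, 0.0 on prompt/padding.

    # Per-token weight: positive where the teacher assigns the token more
    # probability than the current student does. Detached so autograd
    # treats it as a constant.
    advantage = (teacher_logprobs - student_logprobs).detach()

    # REINFORCE-style surrogate: minimizing the negated surrogate raises
    # student log-probs on tokens the teacher prefers and lowers them on
    # tokens it disfavors.
    surrogate = advantage * student_logprobs
    return -(surrogate * mask).sum() / mask.sum().clamp(min=1.0)
```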

Why it matters

Lightning OPD makes on-policy distillation cheaper and more accessible by eliminating the live teacher inference server. This lowers the barrier to entry for LLM post-training research and enables faster research iteration.

Original Abstract

On-policy distillation (OPD) has emerged as an efficient post-training paradigm for large language models. However, standard OPD requires a live teacher inference server throughout training, resulting in substantial infrastructure overhead. In this work, we investigate whether on-policy distillation can be performed offline. A natural approach is to precompute teacher log-probabilities once over SFT rollouts and reuse them during training. In practice, however, this offline variant fails to reliably match the performance of standard OPD. To understand this discrepancy, we identify a previously overlooked condition that is critical for any OPD pipeline, which we term teacher consistency. This condition requires that the same teacher model be used for both supervised fine-tuning and OPD. We show that violating teacher consistency introduces an irreducible gradient bias, causing both offline and online OPD to converge to a suboptimal fixed point regardless of training duration. Building on this insight, we propose Lightning OPD, an offline on-policy distillation framework that enforces teacher consistency by precomputing teacher log-probabilities over SFT rollouts. This design eliminates the need for a live teacher server entirely. We further show that, under teacher consistency, Lightning OPD shares the same optimum as standard OPD, with bounded gradient discrepancy and an implicit regularization effect that helps prevent policy drift. Extensive experiments on mathematical reasoning and code generation demonstrate that Lightning OPD achieves state-of-the-art performance with significantly improved efficiency. Starting from an SFT-initialized Qwen3-8B-Base model, Lightning OPD reaches 69.9% on AIME 2024 in just 30 GPU hours, achieving a 4.0x speedup over standard OPD and substantially lowering the barrier to entry for academic research on LLM post-training.
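
To make the "precompute once" step from the abstract concrete: the teacher scores the fixed SFT rollouts a single time, and the per-token log-probabilities are written to disk for reuse throughout training. Here is a minimal sketch, assuming a Hugging Face-style causal LM whose forward pass returns `.logits`; the function name, file path, and batch layout are illustrative assumptions, not details from the paper.

```python
import torch

@torch.no_grad()
def cache_teacher_logprobs(teacher, dataloader, out_path="teacher_logprobs.pt"):
    # One-time scoring pass over the fixed SFT rollouts. After this runs,
    # training only needs the cached tensors, not a teacher inference server.
    cached = []
    for batch in dataloader:
        input_ids = batch["input_ids"]                # [B, T] token ids
        logits = teacher(input_ids=input_ids).logits  # [B, T, V]
        # Shift by one: position t predicts token t+1. Keep the log-prob
        # the teacher assigns to each realized next token.
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = logprobs.gather(
            -1, input_ids[:, 1:].unsqueeze(-1)
        ).squeeze(-1)                                 # [B, T-1]
        cached.append(token_lp.cpu())
    torch.save(cached, out_path)
```

Per the paper's teacher-consistency condition, the model scored here must be the same teacher that was used for supervised fine-tuning; otherwise the distillation gradient carries an irreducible bias.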
