ArXiv TLDR

Bridging SFT and RL: Dynamic Policy Optimization for Robust Reasoning

arXiv: 2604.08926

Taojie Zhu, Dongyang Xu, Ding Zou, Sen Zhao, Qiaobo Hao + 2 more

cs.LG

TLDR

DYPO unifies SFT and RL for LLM reasoning by dynamically balancing SFT's stability against RL's exploration, reducing fitting bias and gradient variance.

Key contributions

  • Introduces DYPO, a unified framework for SFT and RL to mitigate their statistical conflicts.
  • Uses Group Alignment Loss (GAL) to significantly reduce RL gradient variance.
  • Employs Multi-Teacher Distillation to correct SFT fitting bias via diverse reasoning paths.
  • Features Dynamic Exploitation-Exploration Gating that adaptively arbitrates between stable SFT and exploratory RL based on reward feedback (see the sketch after this list).
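
The gating idea can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not DYPO's actual rule (the digest does not specify it): it blends an SFT loss and an RL loss with a weight driven by recent reward feedback, so training leans on the stable SFT signal when rewards are low and shifts toward exploratory RL as rewards improve. The names `gated_dypo_loss`, `tau`, and `sharpness` are assumptions introduced here.

```python
import torch

def gated_dypo_loss(sft_loss, rl_loss, mean_group_reward, tau=0.5, sharpness=5.0):
    """Reward-gated blend of SFT and RL losses (illustrative sketch only).

    The gate is a sigmoid of the mean group reward around a threshold `tau`:
    low-reward batches lean on the stable SFT term, while high-reward batches
    shift weight toward the exploratory RL term. DYPO's actual gating rule is
    not given in this digest; `tau` and `sharpness` are made-up knobs.
    """
    gate = torch.sigmoid(torch.as_tensor(sharpness * (mean_group_reward - tau)))
    return (1.0 - gate) * sft_loss + gate * rl_loss

# Example: low mean reward keeps the loss close to SFT, high reward shifts it to RL.
sft_loss, rl_loss = torch.tensor(2.0), torch.tensor(1.2)
print(gated_dypo_loss(sft_loss, rl_loss, mean_group_reward=0.2))  # SFT-dominated
print(gated_dypo_loss(sft_loss, rl_loss, mean_group_reward=0.9))  # RL-dominated
```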

Why it matters

This paper offers a principled resolution of the dilemma between SFT's stability (low variance, high fitting bias) and RL's exploration (low bias, high gradient variance). DYPO's dynamic approach significantly improves LLM reasoning, especially on complex and out-of-distribution tasks, moving post-training beyond naive sequential SFT-then-RL pipelines.

Original Abstract

Post-training paradigms for Large Language Models (LLMs), primarily Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), face a fundamental dilemma: SFT provides stability (low variance) but suffers from high fitting bias, while RL enables exploration (low bias) but grapples with high gradient variance. Existing unified optimization strategies often employ naive loss weighting, overlooking the statistical conflict between these distinct gradient signals. In this paper, we provide a rigorous theoretical analysis of this bias-variance trade-off and propose DYPO (Dynamic Policy Optimization), a unified framework designed to structurally mitigate this conflict. DYPO integrates three core components: (1) a Group Alignment Loss (GAL) that leverages intrinsic group dynamics to significantly reduce RL gradient variance; (2) a Multi-Teacher Distillation mechanism that corrects SFT fitting bias via diverse reasoning paths; and (3) a Dynamic Exploitation-Exploration Gating mechanism that adaptively arbitrates between stable SFT and exploratory RL based on reward feedback. Theoretical analysis confirms that DYPO linearly reduces fitting bias and minimizes overall variance. Extensive experiments demonstrate that DYPO significantly outperforms traditional sequential pipelines, achieving an average improvement of 4.8% on complex reasoning benchmarks and 13.3% on out-of-distribution tasks. Our code is publicly available at https://github.com/Tocci-Zhu/DYPO.
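
To make the Group Alignment Loss more tangible, here is a minimal sketch, assuming GAL builds on a group-relative baseline in the spirit of GRPO-style advantage normalization; the paper's exact formulation may differ. Normalizing rewards within each group of sampled responses removes the offset shared by the group, which is the standard way group statistics cut policy-gradient variance.

```python
import torch

def group_normalized_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Center and scale rewards within each group of sampled responses.

    Hypothetical illustration of a group-relative baseline: subtracting the
    per-group mean (and dividing by the per-group std) removes reward offsets
    shared across the group, lowering the variance of the policy-gradient
    estimate. DYPO's actual Group Alignment Loss may differ in its details.

    rewards: shape (num_groups, group_size), one row per prompt's response group.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: two prompts with four sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_normalized_advantages(rewards))  # each row is centered and rescaled
```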
