OGER: A Robust Offline-Guided Exploration Reward for Hybrid Reinforcement Learning
Xinyu Ma, Mingzhou Xu, Xuebo Liu, Chang Jin, Qiang Wang + 2 more
TL;DR
OGER is a new framework that enhances LLM exploration in RLVR by unifying offline guidance and online RL with an entropy-aware reward.
Key contributions
- Unifies offline teacher guidance and online RL through a specialized reward model for LLM exploration.
- Employs multi-teacher collaborative training to construct an auxiliary exploration reward.
- Leverages offline trajectories and the model's own entropy to incentivize autonomous exploration.
- Significantly outperforms baselines in mathematical and general reasoning benchmarks.
Why it matters
LLMs trained with RLVR often struggle to explore reasoning paths beyond their initial latent space. OGER integrates offline teacher guidance with online learning through a novel entropy-aware reward, yielding significant gains on reasoning tasks and robust out-of-domain generalization.
Original Abstract
Recent advancements in Reinforcement Learning with Verifiable Rewards (RLVR) have significantly improved Large Language Model (LLM) reasoning, yet models often struggle to explore novel trajectories beyond their initial latent space. While offline teacher guidance and entropy-driven strategies have been proposed to address this, they often lack deep integration or are constrained by the model's inherent capacity. In this paper, we propose OGER, a novel framework that unifies offline teacher guidance and online reinforcement learning through a specialized reward modeling lens. OGER employs multi-teacher collaborative training and constructs an auxiliary exploration reward that leverages both offline trajectories and the model's own entropy to incentivize autonomous exploration. Extensive experiments across mathematical and general reasoning benchmarks demonstrate that OGER significantly outperforms competitive baselines, achieving substantial gains in mathematical reasoning while maintaining robust generalization to out-of-domain tasks. We provide a comprehensive analysis of training dynamics and conduct detailed ablation studies to validate the effectiveness of our entropy-aware reward modulation. Our code is available at https://github.com/ecoli-hit/OGER.git.
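To make the core idea concrete, here is a minimal sketch of what an entropy-aware auxiliary exploration reward could look like. Everything below is an illustrative assumption, not the paper's actual implementation: the function names (`token_entropy`, `oger_style_reward`), the `teacher_match` score standing in for agreement with offline teacher trajectories, and the coefficient `alpha` are all hypothetical.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def oger_style_reward(verifiable_reward, step_probs, teacher_match, alpha=0.5):
    """Hypothetical entropy-aware reward in the spirit of OGER.

    verifiable_reward: scalar outcome reward from the verifier (e.g. 0 or 1).
    step_probs: list of per-step next-token distributions from the policy.
    teacher_match: hypothetical score in [0, 1] measuring agreement with
        offline teacher trajectories.
    alpha: weight of the auxiliary exploration bonus.
    """
    avg_entropy = sum(token_entropy(p) for p in step_probs) / len(step_probs)
    # Modulate the offline-guidance bonus by the policy's own entropy, so
    # guidance is strongest where the model is uncertain.
    return verifiable_reward + alpha * teacher_match * avg_entropy
```

For example, on a trajectory with a uniform two-way distribution at one step, full teacher agreement adds an entropy-scaled bonus on top of the verifier reward, while zero agreement leaves the verifier reward unchanged.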