Reward Hacking in Rubric-Based Reinforcement Learning
Anas Mahmoud, MohammadHossein Rezaei, Zihao Wang, Anisha Gunjal, Bing Liu + 1 more
TLDR
This paper investigates reward hacking in rubric-based RL, finding that even strong verifiers don't prevent issues if rubrics are flawed, leading to quality declines.
Key contributions
- Proposes a framework to distinguish verifier failure from rubric-design limitations in RL.
- Shows weak verifiers cause proxy-reward gains that don't transfer, indicating exploitation.
- Introduces "self-internalization gap" to track reference-verifier quality without a verifier.
- Finds stronger verification reduces but doesn't eliminate reward hacking with flawed rubrics.
Why it matters
This paper is crucial for understanding reward hacking in open-ended RL, revealing that even strong verifiers can fail if rubrics are incomplete. It offers insights into designing more robust reward systems and highlights the need for better rubric design to ensure true quality gains.
Original Abstract
Reinforcement learning with verifiable rewards has enabled strong post-training gains in domains such as math and coding, though many open-ended settings rely on rubric-based rewards. We study reward hacking in rubric-based RL, where a policy is optimized against a training verifier but evaluated against a cross-family panel of three frontier judges, reducing dependence on any single evaluator. Our framework separates two sources of divergence: verifier failure, where the training verifier credits rubric criteria that reference verifiers reject, and rubric-design limitations, where even strong rubric-based verifiers favor responses that rubric-free judges rate worse overall. Across medical and science domains, weak verifiers produce large proxy-reward gains that do not transfer to the reference verifiers; exploitation grows over training and concentrates in recurring failures such as partial satisfaction of compound criteria, treating implicit content as explicit, and imprecise topical matching. Stronger verifiers substantially reduce, but do not eliminate, verifier exploitation. We also introduce a self-internalization gap, a verifier-free diagnostic based on policy log-probabilities, which tracks reference-verifier quality, detecting when the policy trained using the weak verifier stops improving. Finally, in our setting, stronger verification does not prevent reward hacking when the rubric leaves important failure modes unspecified: rubric-based verifiers prefer the RL checkpoint, while rubric-free judges prefer the base model. These disagreements coincide with gains concentrated in completeness and presence-based criteria, alongside declines in factual correctness, conciseness, relevance, and overall quality. Together, these results suggest that stronger verification reduces reward hacking, but does not by itself ensure that rubric gains correspond to broader quality gains.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.