ArXiv TLDR

Multi-Objective and Mixed-Reward Reinforcement Learning via Reward-Decorrelated Policy Optimization

arXiv:2605.13641

Yang Bai, Kaiyuan Liu, Ziyuan Zhuang, Jiahong Zhou, Rongxiang Weng + 3 more

cs.LG · cs.CL

TLDR

RDPO improves multi-objective and mixed-reward RL by decorrelating rewards and stabilizing advantage allocation for diverse reward types.

Key contributions

  • Introduces Reward-Decorrelated Policy Optimization (RDPO) for complex multi-objective RL.
  • Stabilizes prompt-level advantage allocation across binary, fractional, and continuous rewards using Magnitude-Aware Quantile normalization (sketched below).
  • Mitigates reward correlation redundancy via Mahalanobis whitening within active reward subspaces.
  • Improves instruction following, writing quality, and robustness to hard prompts when applied during LongCat-Flash post-training.
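
To make the normalization step concrete, here is a minimal sketch of group-wise quantile normalization in Python. The function name `quantile_normalize` and the centered-rank rescaling are illustrative assumptions, not the paper's implementation; the abstract does not say how the magnitude-aware component works, so it is omitted here.

```python
import numpy as np
from scipy.stats import rankdata

def quantile_normalize(rewards: np.ndarray) -> np.ndarray:
    """Map one prompt group's rewards to centered quantile ranks in [-1, 1].

    Hypothetical stand-in for the paper's Magnitude-Aware Quantile
    normalization: ranking puts binary, fractional, and continuous
    rewards on a shared scale before advantages are aggregated.
    """
    n = rewards.size
    if n < 2 or np.ptp(rewards) == 0.0:
        return np.zeros(n)                        # degenerate group: no signal
    ranks = rankdata(rewards, method="average")   # ties share an averaged rank
    return 2.0 * (ranks - 1.0) / (n - 1.0) - 1.0  # rescale ranks 1..n to [-1, 1]
```

For a prompt group with binary rewards such as [0, 1, 1, 0], all correct completions share one rank and all incorrect ones share another, so the resulting scores occupy the same bounded range as those from a fractional or continuous reward, which is what keeps per-prompt advantage allocation stable across reward types.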

Why it matters

Complex multi-task and mixed-reward RL environments often suffer from unstable advantage estimation because rewards are heterogeneous in distribution and correlated across dimensions. RDPO addresses both failure modes explicitly, through reward normalization and decorrelation, which translates into better downstream behavior on qualities such as LLM instruction following, writing, and robustness to hard prompts.

Original Abstract

Complex reinforcement learning environments frequently employ multi-task and mixed-reward formulations. In these settings, heterogeneous reward distributions and correlated reward dimensions often destabilize the construction of scalar advantages. To address these challenges, we propose Reward-Decorrelated Policy Optimization (RDPO), a reward-processing method designed to explicitly target both failure modes. RDPO first utilizes Magnitude-Aware Quantile normalization to stabilize prompt-level advantage allocation across binary, fractional, and continuous rewards. It then applies Mahalanobis whitening within each active reward subspace to mitigate correlation redundancy prior to aggregation. When applied during the post-training of LongCat-Flash, RDPO enhances instruction following, writing quality, and robustness to hard prompts while remaining broadly competitive on reasoning and coding evaluations.
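
As a rough illustration of the decorrelation step, the sketch below applies standard Mahalanobis (ZCA) whitening to a batch of reward vectors, restricted to the columns that are nonzero in the batch. Treating the "active reward subspace" as the set of nonzero columns is an assumption, and `whiten_active_rewards` is a hypothetical name; the paper may define both differently.

```python
import numpy as np

def whiten_active_rewards(R: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Decorrelate a (batch, num_rewards) matrix over its active columns.

    "Active" is assumed here to mean columns with any nonzero entry in
    the batch; inactive columns pass through unchanged.
    """
    active = np.any(R != 0.0, axis=0)
    if not active.any():
        return R.copy()
    X = R[:, active] - R[:, active].mean(axis=0)       # center active rewards
    cov = np.cov(X, rowvar=False) + eps * np.eye(active.sum())
    vals, vecs = np.linalg.eigh(cov)                   # cov = V diag(vals) V^T
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T          # cov^{-1/2}, ZCA form
    out = R.copy()
    out[:, active] = X @ W                             # ~identity covariance
    return out
```

After whitening, the active reward dimensions have approximately unit variance and no cross-correlation, so summing them into a scalar advantage no longer double-counts signals from rewards that tend to fire together.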
