PrefMoE: Robust Preference Modeling with Mixture-of-Experts Reward Learning
Ziqin Yuan, Ruiqi Wang, Dezhong Zhao, Baijian Yang, Byung-Cheol Min
TLDR
PrefMoE learns rewards from noisy preference data with a mixture of specialized experts and soft routing, improving downstream policy learning in RL.
Key contributions
- Introduces PrefMoE, a mixture-of-experts framework for robust preference-based reward learning.
- Learns multiple specialized reward experts to capture diverse, noisy preference patterns.
- Uses trajectory-level soft routing to adaptively combine experts for improved robustness.
- Employs a load-balancing regularizer to stabilize training and prevent expert collapse (see the sketch after this list).
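
The digest gives no implementation details, so the following is a minimal PyTorch sketch of how an MoE reward model with trajectory-level soft routing and a load-balancing term could be wired up. The layer sizes, the mean-pooled gating input, and the coefficient-of-variation regularizer are illustrative assumptions, not PrefMoE's actual specification.

```python
# Hypothetical sketch of an MoE reward model in the spirit of PrefMoE.
# Architecture choices below are assumptions, not the paper's design.
import torch
import torch.nn as nn


class MoERewardModel(nn.Module):
    def __init__(self, obs_act_dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        # Each expert maps a state-action feature vector to a scalar reward.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_act_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_experts)
        )
        # Gating network producing one routing logit per expert.
        self.gate = nn.Linear(obs_act_dim, num_experts)

    def forward(self, traj: torch.Tensor):
        """traj: (batch, T, obs_act_dim), a segment of state-action pairs."""
        # Per-step rewards from every expert: (batch, T, num_experts).
        per_expert = torch.cat([e(traj) for e in self.experts], dim=-1)
        # Trajectory-level soft routing: pool features over time so the
        # whole segment shares a single mixture over experts.
        gate_w = torch.softmax(self.gate(traj.mean(dim=1)), dim=-1)  # (batch, K)
        # Mixture reward per step, summed into a segment return.
        step_reward = (per_expert * gate_w.unsqueeze(1)).sum(dim=-1)  # (batch, T)
        return step_reward.sum(dim=1), gate_w


def load_balance_loss(gate_w: torch.Tensor) -> torch.Tensor:
    """Discourage expert collapse by penalizing the squared coefficient of
    variation of the batch-averaged routing weights (a common MoE choice)."""
    mean_w = gate_w.mean(dim=0)
    return (mean_w.std() / (mean_w.mean() + 1e-8)) ** 2
```

Routing at the trajectory level rather than per step plausibly matches how preference labels are attached to whole segments: a single mixture of experts explains each compared trajectory.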
Why it matters
Preference-based RL (PbRL) struggles with noisy, heterogeneous human feedback. PrefMoE offers a robust alternative: rather than averaging conflicting signals into a single reward model, it learns specialized experts and routes among them, yielding more reliable policies and bringing PbRL closer to practical, real-world use.
Original Abstract
Preference-based reinforcement learning offers a scalable alternative to manual reward engineering by learning reward structures from comparative feedback. However, large-scale preference datasets, whether collected from crowdsourced annotators or generated by synthetic teachers, often contain heterogeneous and partially conflicting supervision, including disagreement across annotators and inconsistency within annotators. Existing reward learning methods typically fit a single reward model to such data, forcing it to average incompatible signals and thereby limiting robustness. To solve this, we propose PrefMoE, a mixture-of-experts reward learning framework for robust preference modeling. PrefMoE learns multiple specialized reward experts and uses trajectory-level soft routing to combine them adaptively, enabling the model to capture diverse latent preference patterns under noisy and heterogeneous preference supervision. A load-balancing regularizer further stabilizes training by preventing expert collapse. Across locomotion benchmarks from D4RL and manipulation tasks from MetaWorld, PrefMoE improves preference prediction robustness and leads to more reliable downstream policy learning than strong single-model baselines.
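
In preference-based reward learning, segment returns are commonly fit to pairwise labels with a Bradley-Terry style objective; the sketch below (reusing `MoERewardModel` and `load_balance_loss` from above) shows how such a loss could be combined with the load-balancing regularizer. The Bradley-Terry form, the `lb_coef` weight, and the label convention are assumptions; the digest does not state PrefMoE's exact training objective.

```python
# Hypothetical training loss: Bradley-Terry preference term plus the
# load-balancing regularizer. Reuses MoERewardModel / load_balance_loss
# from the sketch above; lb_coef is an assumed hyperparameter.
import torch
import torch.nn.functional as F


def preference_loss(model, seg_a, seg_b, label, lb_coef=0.01):
    """label: 1.0 where seg_a is preferred, 0.0 where seg_b is preferred."""
    ret_a, gate_a = model(seg_a)  # segment returns and routing weights
    ret_b, gate_b = model(seg_b)
    # P(a preferred over b) = sigmoid(return_a - return_b) under Bradley-Terry.
    bt = F.binary_cross_entropy_with_logits(ret_a - ret_b, label)
    # Regularize routing over both segments to keep all experts in use.
    lb = load_balance_loss(torch.cat([gate_a, gate_b], dim=0))
    return bt + lb_coef * lb
```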