
Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization

arXiv: 2310.03708

Zhanhui Zhou, Jie Liu, Jing Shao, Xiangyu Yue, Chao Yang + 2 more

cs.LG, cs.AI

TLDR

MODPO is a novel, RL-free method for aligning language models to multiple human preferences simultaneously, achieving stable and efficient optimization across diverse objectives.

Key contributions

  • Introduces Multi-Objective Direct Preference Optimization (MODPO), extending DPO to handle multiple alignment objectives without reinforcement learning.
  • MODPO folds language modeling into reward modeling within a single training process, training the language model itself as an implicit collective reward model that combines all objectives with specific weights (see the sketch after this list).
  • Demonstrates that MODPO matches or outperforms multi-objective RLHF (MORLHF) baselines while requiring roughly a third of the computational resources and offering greater training stability.
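
To make the second point concrete, here is a minimal PyTorch-style sketch of what a MODPO-style training loss could look like: a DPO-style logistic loss on the policy's implicit reward, shifted by a margin computed from the other objectives' frozen reward models. The function name, argument names, and default hyperparameters below are illustrative assumptions, not taken from the paper or its released code.

import torch.nn.functional as F

def modpo_style_loss(
    policy_logps_chosen,    # log pi_theta(y_w | x), per-sequence, shape (batch,)
    policy_logps_rejected,  # log pi_theta(y_l | x), shape (batch,)
    ref_logps_chosen,       # log pi_ref(y_w | x) from the frozen SFT/reference model
    ref_logps_rejected,     # log pi_ref(y_l | x)
    margin_chosen,          # weighted rewards of the other objectives on y_w (precomputed)
    margin_rejected,        # same for y_l
    beta=0.1,               # KL-regularization strength, as in DPO
    w_k=0.5,                # weight of the objective the preference data covers
):
    """DPO-style logistic preference loss with a margin for the remaining objectives.

    Sketch of the idea: the policy's log-ratio acts as an implicit reward for
    the objective trained on preferences, while reward models for the other
    objectives enter as a fixed margin, so a weighted combination of all
    objectives is optimized without RL.
    """
    # Implicit reward difference between chosen and rejected responses.
    pi_logratio = policy_logps_chosen - policy_logps_rejected
    ref_logratio = ref_logps_chosen - ref_logps_rejected
    implicit_reward_diff = (beta / w_k) * (pi_logratio - ref_logratio)

    # Margin from the other objectives' reward models (weighted, precomputed).
    margin_diff = (margin_chosen - margin_rejected) / w_k

    # Standard logistic (Bradley-Terry) loss on the shifted reward gap.
    return -F.logsigmoid(implicit_reward_diff - margin_diff).mean()

Under this reading, training needs only forward passes of a frozen reference model and frozen margin reward models on top of standard preference optimization, which would be consistent with the reported stability and compute savings over MORLHF.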

Why it matters

This paper addresses the critical challenge of aligning language models to diverse and often conflicting human preferences without the instability and high cost of reinforcement learning. By proposing MODPO, it enables more practical, scalable, and efficient multi-objective alignment, facilitating the deployment of language models better tailored to varied user needs and safety requirements.

Original Abstract

A single language model, even when aligned with labelers through reinforcement learning from human feedback (RLHF), may not suit all human preferences. Recent approaches therefore prefer customization, gathering multi-dimensional feedback, and creating distinct reward models for each dimension. Different language models are then optimized for various preferences using multi-objective RLHF (MORLHF) with varying reward weights. However, RL fine-tuning is unstable and resource-heavy, especially with diverse and usually conflicting objectives. In this paper, we present Multi-Objective Direct Preference Optimization (MODPO), an RL-free extension of Direct Preference Optimization (DPO) for multiple alignment objectives. Essentially, MODPO folds language modeling directly into reward modeling, training language models as implicit collective reward models that combine all objectives with specific weights. MODPO theoretically yields the same optimal solutions as MORLHF but is practically more stable and efficient. Empirical results in safety alignment and long-form question answering show that MODPO matches or outperforms existing methods, producing a Pareto front of language models catering to diverse preferences with three times less computational resources compared to MORLHF. Code is available at https://github.com/ZHZisZZ/modpo.
