ArXiv TLDR

Beyond Pairs: Your Language Model is Secretly Optimizing a Preference Graph

2605.08037

Ning Liu, Chuanneng Sun, Kristina Klinkner, Shervin Malmasi

cs.LG cs.AI

TLDR

GraphDPO extends DPO by optimizing language models over preference graphs, exploiting the richer preference structure in multi-rollout data for improved alignment.

Key contributions

  • Generalizes DPO to optimize over directed acyclic preference graphs induced by rollout rankings.
  • Uses a Plackett-Luce-inspired objective that aggregates supervision over graph neighborhoods and enforces transitivity (sketched in code after this list).
  • Handles discrete or sparse signals via equivalence classes whose intra-layer edges contribute zero loss, preventing spurious gradients.
  • Supports optional ground-truth anchoring and keeps per-prompt complexity linear via log-sum-exp aggregation.
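To make these bullets concrete, here is a minimal PyTorch sketch of how a graph-structured, Plackett-Luce-inspired loss could look for one prompt. The function name, the integer layer encoding, and the exact neighborhood definition (each node competes against itself plus everything in strictly worse layers) are assumptions inferred from the abstract, not the authors' released code.

```python
import torch

def graphdpo_loss(policy_logps, ref_logps, layers, beta=0.1):
    """Sketch of a graph-structured, Plackett-Luce-inspired loss (one prompt).

    policy_logps, ref_logps: [n] sequence log-probs for n rollouts.
    layers: [n] integer layer per rollout; lower = more preferred,
            equal = same equivalence class (ties contribute zero loss).
    """
    # Implicit DPO reward per rollout: r_i = beta * (log pi_theta - log pi_ref).
    r = beta * (policy_logps - ref_logps)

    # Sort by layer so each node's dominated set is a contiguous suffix.
    order = torch.argsort(layers)
    r, layers = r[order], layers[order]

    # Suffix log-sum-exp, lse[i] = logsumexp(r[i:]), computed once in O(n).
    lse = torch.logcumsumexp(r.flip(0), dim=0).flip(0)

    terms = []
    for i in range(r.shape[0]):
        worse = (layers > layers[i]).nonzero()
        if worse.numel() == 0:
            continue  # bottom layer: dominates nothing, no edge, no loss
        j = int(worse[0])
        # -log P(i beats its dominated neighborhood); intra-layer rivals are
        # excluded, so identical-preference rollouts add zero gradient.
        terms.append(-(r[i] - torch.logaddexp(r[i], lse[j])))
    if not terms:
        return r.sum() * 0.0  # all rollouts tied: nothing to optimize
    return torch.stack(terms).mean()
```

With two rollouts in distinct layers this reduces to -log σ(r_w − r_l), i.e., standard DPO, consistent with the abstract's claim that pairwise DPO is recovered as a special case; the single suffix log-sum-exp pass is what keeps per-prompt cost linear.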

Why it matters

This paper addresses a key limitation of pairwise DPO: collapsing multi-rollout data into independent pairs discards transitivity and yields redundant or conflicting supervision. GraphDPO exploits the full preference structure instead, offering a more robust and scalable alignment method for language models, with superior reported performance on reasoning and program synthesis tasks.

Original Abstract

Direct Preference Optimization (DPO) aligns language models using pairwise preference comparisons, offering a simple and effective alternative to Reinforcement Learning (RL) from human feedback. However, in many practical settings, training data consists of multiple rollouts per prompt, inducing rich preference structure that pairwise DPO fails to exploit. Collapsing such data into independent pairs discards transitivity, introduces redundant or conflicting supervision, and can lead to unstable optimization. We propose Graph Direct Preference Optimization (GraphDPO), a principled generalization of DPO that operates over directed acyclic preference graphs induced by rollout rankings. GraphDPO encodes dominance relations as edges and optimizes a graph-structured Plackett-Luce-inspired objective that aggregates supervision over graph neighborhoods, enforcing transitivity while recovering standard DPO as a special case. To handle discrete or sparse signals, we introduce an equivalence-class construction where responses with identical preferences form graph layers, and intra-layer edges contribute zero loss, preventing spurious gradients. Despite leveraging full graph structure, GraphDPO maintains linear per-prompt complexity via efficient log-sum-exp aggregation. We further incorporate optional ground-truth anchoring by inserting verified solutions as dominant nodes and applying an annealed schedule that stabilizes early training while gradually relaxing oracle supervision. Experiments on reasoning and program synthesis tasks demonstrate superior performance, suggesting that graph-structured preference modeling is a scalable and robust alternative to pairwise and listwise alignment objectives.
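The ground-truth anchoring described in the abstract lends itself to a small extension of the earlier sketch. The abstract does not specify the annealing schedule or how oracle and rollout supervision are mixed, so the linear decay and the blending rule below are assumptions; `graphdpo_loss` is the function from the sketch above.

```python
import torch

def anchored_graphdpo_loss(policy_logps, ref_logps, layers,
                           oracle_policy_logp, oracle_ref_logp,
                           step, beta=0.1, decay_steps=10_000):
    """Ground-truth anchoring with an annealed schedule (assumed form).

    Inserts a verified solution as a dominant node (a new top layer),
    then decays its influence linearly over decay_steps.
    Reuses graphdpo_loss from the sketch earlier in this digest.
    """
    # Prepend the oracle at layer 0; shift every rollout layer down by one.
    p = torch.cat([oracle_policy_logp.view(1), policy_logps])
    q = torch.cat([oracle_ref_logp.view(1), ref_logps])
    l = torch.cat([torch.zeros(1, dtype=layers.dtype), layers + 1])

    lam = max(0.0, 1.0 - step / decay_steps)  # oracle weight: 1 -> 0
    anchored = graphdpo_loss(p, q, l, beta)   # graph with the oracle node
    base = graphdpo_loss(policy_logps, ref_logps, layers, beta)
    return lam * anchored + (1.0 - lam) * base
```

Early in training lam ≈ 1, so every rollout is pushed below the verified solution; as lam decays, the model relies increasingly on the rollout ranking alone, matching the abstract's description of stabilizing early training while gradually relaxing oracle supervision.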

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.