ArXiv TLDR

DynamicPO: Dynamic Preference Optimization for Recommendation

arXiv:2605.00327

Xingyu Hu, Kai Zhang, Jiancan Wu, Shuli Wang, Chi Wang + 5 more

cs.IR, cs.AI

TLDR

DynamicPO prevents preference optimization collapse in LLM-based recommendation by adaptively selecting boundary negatives and adjusting optimization strength.

Key contributions

  • Identifies and theoretically explains "preference optimization collapse," where adding negatives degrades multi-negative DPO for LLM-based recommendation despite a falling training loss.
  • Proposes DynamicPO, a lightweight plug-and-play framework with Dynamic Boundary Negative Selection, which prioritizes informative negatives near the model's decision boundary.
  • Introduces Dual-Margin Dynamic β Adjustment, which calibrates per-sample optimization strength according to boundary ambiguity (both mechanisms are sketched after this list).
  • Demonstrates improved recommendation accuracy and collapse prevention with negligible computational overhead across three public datasets.
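A minimal sketch of how the two mechanisms could compose into a single loss, assuming a DPO-style implicit reward (the policy-vs-reference log-ratio of each candidate item). The top-k selection rule, the sigmoid-based β schedule, and all names (`dynamicpo_loss`, `k`, `beta_lo`, `beta_hi`) are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def dynamicpo_loss(pos_logratio, neg_logratios, k=4, beta_lo=0.05, beta_hi=0.5):
    """pos_logratio: (B,) policy-vs-reference log-ratio of the preferred item.
    neg_logratios: (B, N) log-ratios of N implicit-feedback negatives."""
    # Reward margin of each negative vs. the positive; small margin = near boundary.
    margins = pos_logratio.unsqueeze(-1) - neg_logratios            # (B, N)

    # Dynamic Boundary Negative Selection (assumed rule): keep the k negatives
    # with the smallest margins, i.e. the most boundary-critical ones.
    k = min(k, neg_logratios.size(-1))
    boundary, _ = torch.topk(margins, k, dim=-1, largest=False)     # (B, k)

    # Dual-Margin Dynamic β Adjustment (assumed schedule): samples with a small
    # mean boundary margin are ambiguous and get a gentler beta; well-separated
    # samples get a sharper one, interpolated between beta_lo and beta_hi.
    ambiguity = torch.sigmoid(boundary.mean(dim=-1, keepdim=True))  # (B, 1)
    beta = beta_lo + (beta_hi - beta_lo) * ambiguity

    # Multi-negative DPO objective restricted to the selected negatives.
    return -F.logsigmoid(beta * boundary).mean()
```

Restricting the loss to the smallest-margin negatives keeps boundary-critical gradients from being averaged away by easy negatives, while the per-sample β tempers updates on samples whose boundary is still ambiguous.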

Why it matters

LLM-based recommenders trained with DPO rely on abundant implicit-feedback negatives, yet adding more negatives can silently erode accuracy even as training loss keeps falling. By diagnosing this "preference optimization collapse" as gradient suppression and countering it with a lightweight, plug-and-play fix, DynamicPO makes multi-negative preference optimization more robust and accurate without meaningful extra cost.

Original Abstract

In large language model (LLM)-based recommendation systems, direct preference optimization (DPO) effectively aligns recommendations with user preferences, requiring multi-negative objective functions to leverage abundant implicit-feedback negatives and sharpen preference boundaries. However, our empirical analyses reveal a counterintuitive phenomenon, preference optimization collapse, where increasing the number of negative samples can lead to performance degradation despite a continuously decreasing training loss. We further theoretically demonstrate that this collapse arises from gradient suppression, caused by the dominance of easily discriminable negatives over boundary-critical negatives that truly define user preference boundaries. As a result, boundary-relevant signals are under-optimized, weakening the model's decision boundary. Motivated by these observations, we propose DynamicPO (Dynamic Preference Optimization), a lightweight and plug-and-play framework comprising two adaptive mechanisms: Dynamic Boundary Negative Selection, which identifies and prioritizes informative negatives near the model's decision boundary, and Dual-Margin Dynamic beta Adjustment, which calibrates optimization strength per sample according to boundary ambiguity. Extensive experiments on three public datasets show that DynamicPO effectively prevents optimization collapse and improves recommendation accuracy on multi-negative preference optimization methods, with negligible computational overhead. Our code and datasets are available at https://github.com/xingyuHuxingyu/DynamicPO.
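A toy numerical reading of the gradient-suppression claim, assuming a mean-pooled multi-negative DPO loss L = -(1/N) Σ_i log σ(β m_i) over reward margins m_i. The pooling choice and the margin values are assumptions for illustration, not the paper's exact analysis:

```python
import torch
import torch.nn.functional as F

beta = 1.0
for n_easy in (1, 8, 64, 512):
    # One boundary-critical negative (margin ~0) plus n_easy easy negatives
    # that are already well separated (margin 3.0).
    margins = torch.cat([torch.zeros(1), torch.full((n_easy,), 3.0)])
    margins.requires_grad_(True)

    # Mean-pooled multi-negative DPO loss over all negatives.
    loss = -F.logsigmoid(beta * margins).mean()
    loss.backward()

    grad = margins.grad.abs()
    share = grad[0] / grad.sum()  # gradient share of the boundary negative
    print(f"N={n_easy:4d}  loss={loss.item():.4f}  "
          f"boundary-grad={grad[0].item():.4f}  boundary-share={share.item():.3f}")
```

As the count of easy negatives grows, the printed loss falls while the gradient reaching the single boundary-critical negative shrinks roughly as 1/N, matching the collapse pattern the abstract describes: training loss keeps improving even as boundary-relevant signals are under-optimized.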
