Efficient Preference Poisoning Attack on Offline RLHF
Chenye Yang, Weiyu Xu, Lifeng Lai
TLDR
This paper introduces two efficient preference-poisoning attacks (BAL-A and BMP-A) that compromise offline RLHF methods such as DPO by flipping a small number of preference labels.
Key contributions
- Shows that flipping one preference label induces a parameter-independent shift in the DPO gradient (see the derivation sketch after this list).
- Converts the targeted poisoning problem into a structured binary sparse approximation problem.
- Develops two novel attack methods: Binary-Aware Lattice Attack (BAL-A) and Binary Matching Pursuit Attack (BMP-A), both sketched after the abstract below.
- Provides theoretical guarantees and validates attacks on synthetic and Stanford Human Preferences datasets.
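Why the shift is parameter-independent (a sketch in our own notation, which may differ from the paper's): for a log-linear policy $\pi_\theta(y \mid x) \propto \exp(\theta^\top \phi(x, y))$, the per-example DPO loss reduces to $\ell(\theta) = -\log \sigma(z)$ with $z = \beta\,\theta^\top \Delta\phi + c$, where $\Delta\phi = \phi(x, y_w) - \phi(x, y_l)$ and $c$ collects the reference-policy terms. Then $\nabla_\theta \ell = -\sigma(-z)\,\beta\,\Delta\phi$. Flipping the label negates $\Delta\phi$, $c$, and hence $z$, so the flipped gradient is $\sigma(z)\,\beta\,\Delta\phi$ and the induced shift is

$$\nabla_\theta \ell_{\mathrm{flip}} - \nabla_\theta \ell = \big(\sigma(z) + \sigma(-z)\big)\,\beta\,\Delta\phi = \beta\,\Delta\phi,$$

using $\sigma(z) + \sigma(-z) = 1$. The shift depends only on the example's feature difference, not on $\theta$.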
Why it matters
Offline RLHF models are vulnerable to data poisoning, which can compromise their safety and fairness. This paper presents practical and theoretically grounded attacks, highlighting critical vulnerabilities in DPO. Understanding these attacks is crucial for developing robust and secure RLHF systems.
Original Abstract
Offline Reinforcement Learning from Human Feedback (RLHF) pipelines such as Direct Preference Optimization (DPO) train on a pre-collected preference dataset, which makes them vulnerable to preference poisoning attack. We study label flip attacks against log-linear DPO. We first illustrate that flipping one preference label induces a parameter-independent shift in the DPO gradient. Using this key property, we can then convert the targeted poisoning problem into a structured binary sparse approximation problem. To solve this problem, we develop two attack methods: Binary-Aware Lattice Attack (BAL-A) and Binary Matching Pursuit Attack (BMP-A). BAL-A embeds the binary flip selection problem into a binary-aware lattice and applies Lenstra-Lenstra-Lovász reduction and Babai's nearest plane algorithm; we provide sufficient conditions that enforce binary coefficients and recover the minimum-flip objective. BMP-A adapts binary matching pursuit to our non-normalized gradient dictionary and yields coherence-based recovery guarantees and robustness (impossibility) certificates for $K$-flip budgets. Experiments on synthetic dictionaries and the Stanford Human Preferences dataset validate the theory and highlight how dictionary geometry governs attack success.
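To make the sparse-approximation view concrete, below is a minimal greedy sketch in the spirit of binary matching pursuit: given a dictionary whose columns are the per-flip gradient shifts $\beta\,\Delta\phi_j$ and an attacker's target shift, it selects up to $K$ flips with coefficients fixed to 1. The function name and stopping rule are our illustrative assumptions; the paper's BMP-A additionally handles the non-normalized dictionary and comes with coherence-based recovery guarantees and impossibility certificates.

```python
import numpy as np

def binary_matching_pursuit(D, target, budget):
    """Greedily pick columns of D whose {0,1}-weighted sum approximates
    the target gradient shift.

    D      : (d, n) dictionary; column j is the shift beta * delta_phi_j
             induced by flipping example j's preference label.
    target : (d,) gradient shift the attacker wants to induce.
    budget : maximum number of flips K.
    Returns the list of selected flip indices.
    """
    residual = target.astype(float).copy()
    selected = []
    for _ in range(budget):
        # Since flips are binary, each atom enters with coefficient 1:
        # score atom j by the residual norm left after adding it.
        scores = np.array([
            np.inf if j in selected else np.linalg.norm(residual - D[:, j])
            for j in range(D.shape[1])
        ])
        j_best = int(np.argmin(scores))
        if scores[j_best] >= np.linalg.norm(residual):
            break  # no remaining flip improves the approximation
        selected.append(j_best)
        residual -= D[:, j_best]
    return selected
```

A caller would build `D` from the candidate examples' feature differences and set `target` to the aggregate gradient shift that steers the learned policy toward the attacker's goal.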
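For BAL-A's lattice step, here is a minimal sketch of Babai's nearest-plane algorithm, the subroutine the abstract names. The binary-aware embedding and the LLL reduction that the paper uses to enforce $\{0,1\}$ coefficients are omitted, so the returned coefficients here are unconstrained integers; all names are illustrative.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm: find a lattice point B @ c
    (c integer) close to the target t.

    B : (d, n) basis matrix, basis vectors as columns, full column rank.
    t : (d,) target vector.
    In BAL-A this step would run on an LLL-reduced, binary-aware
    embedding of the flip-selection lattice; only the standard
    nearest-plane core is shown here.
    """
    Q, R = np.linalg.qr(B)  # Gram-Schmidt: b*_i = R[i, i] * Q[:, i]
    n = B.shape[1]
    coeffs = np.zeros(n, dtype=int)
    residual = t.astype(float).copy()
    for i in reversed(range(n)):
        # Project the residual onto the i-th Gram-Schmidt direction and
        # round to the nearest integer multiple of basis vector b_i.
        c = int(round((residual @ Q[:, i]) / R[i, i]))
        coeffs[i] = c
        residual -= c * B[:, i]
    return B @ coeffs, coeffs
```

Running nearest-plane on an LLL-reduced basis is what gives Babai's method its approximation guarantee; BAL-A's sufficient conditions (per the abstract) are what additionally force the recovered coefficients to be binary.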