ArXiv TLDR

Synthetic Data for any Differentiable Target

2604.08423

Tristan Thrush, Sung Min Park, Herman Brunborg, Luke Bailey, Marcel Roed + 3 more

cs.CL · cs.AI · cs.LG · stat.ML

TLDR

DPG uses RL with higher-order-gradient data attribution as the reward signal to optimize synthetic data generators, so that supervised fine-tuning on the generated data steers a target model toward any differentiable objective.

Key contributions

  • Introduces Dataset Policy Gradient (DPG) for optimizing synthetic data generators.
  • Uses exact data attribution via higher-order gradients as policy gradient rewards.
  • Precisely controls target model's LM head weights (e.g., embed QR codes, specific patterns).
  • Enables generators to rephrase inputs in new languages or produce specific UUIDs without explicit prompts.
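The mechanism in the bullets above can be illustrated with a toy sketch. This is not the paper's implementation: it assumes a one-step SFT update on a scalar target weight, scores each generated example by the exact metric improvement it causes (standing in for the higher-order-gradient attribution), and uses that score as a REINFORCE reward for a categorical "generator" policy. The metric here mirrors objective (3) from the abstract, lowering the weight's squared norm; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = np.array([-1.0, 0.0, 1.0])  # toy generator's vocabulary of examples
logits = np.zeros(3)                     # generator policy parameters
w, eta, lr = 1.0, 0.25, 0.5              # target weight, SFT step size, policy LR

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def metric(w):
    # differentiable target metric: squared norm of the target weight
    return w ** 2

for step in range(500):
    p = softmax(logits)
    i = rng.choice(3, p=p)
    x = candidates[i]
    # one SFT step of the target on generated example x, with loss (w - x)^2
    w_new = w - eta * 2.0 * (w - x)
    # attribution reward: how much this single example improved the metric
    # (stand-in for the paper's higher-order-gradient attribution score)
    r = metric(w) - metric(w_new)
    # REINFORCE update: grad of log p_i w.r.t. logits is (onehot_i - p)
    onehot = np.zeros(3)
    onehot[i] = 1.0
    logits += lr * r * (onehot - p)

p = softmax(logits)
print(p)
```

With w = 1 and eta = 0.25, the SFT step gives w_new = 0.5 + 0.5x, so the example x = -1 drives the metric to zero and earns the largest reward; the policy concentrates on it. This is the core DPG loop in miniature: the generator never sees the objective in its "prompt", it only receives attribution-based rewards.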

Why it matters

This paper presents DPG, a powerful and flexible technique for shaping language model properties using only synthetic training data. It demonstrates fine-grained control over model behavior, even for complex, non-obvious objectives such as embedding a QR code in the LM head weights. This opens new avenues for fine-tuning and steering AI models effectively.

Original Abstract

What are the limits of controlling language models via synthetic training data? We develop a reinforcement learning (RL) primitive, the Dataset Policy Gradient (DPG), which can precisely optimize synthetic data generators to produce a dataset of targeted examples. When used for supervised fine-tuning (SFT) of a target model, these examples cause the target model to do well on a differentiable metric of our choice. Our approach achieves this by taking exact data attribution via higher-order gradients and using those scores as policy gradient rewards. We prove that this procedure closely approximates the true, intractable gradient for the synthetic data generator. To illustrate the potential of DPG, we show that, using only SFT on generated examples, we can cause the target model's LM head weights to (1) embed a QR code, (2) embed the pattern $\texttt{67}$, and (3) have lower $\ell^2$ norm. We additionally show that we can cause the generator to (4) rephrase inputs in a new language and (5) produce a specific UUID, even though neither of these objectives is conveyed in the generator's input prompts. These findings suggest that DPG is a powerful and flexible technique for shaping model properties using only synthetic training examples.
