ArXiv TLDR

ParetoSlider: Diffusion Models Post-Training for Continuous Reward Control

arXiv:2604.20816

Shelly Golan, Michael Finkelson, Ariel Bereslavsky, Yotam Nitzan, Or Patashnik

cs.LG, cs.CV

TLDR

ParetoSlider trains a single diffusion model using multi-objective RL to provide continuous, inference-time control over conflicting rewards.

Key contributions

  • Trains a single diffusion model to approximate the entire Pareto front for multiple objectives.
  • Uses continuous preference weights as a conditioning signal for inference-time control (see the sketch after this list).
  • Eliminates the need for retraining or multiple checkpoints for different trade-offs.
  • Matches or exceeds performance of baselines trained for fixed reward trade-offs.
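A minimal sketch of the idea behind the preference-weight bullet, assuming linear scalarization and a Dirichlet distribution over the weights (neither is stated in the digest); `sample_preference` and `scalarized_reward` are illustrative names, not the paper's code:

```python
import torch

def sample_preference(num_objectives: int = 2) -> torch.Tensor:
    # Draw preference weights that sum to 1. The Dirichlet prior is an
    # assumption; the paper only says the weights vary continuously in training.
    return torch.distributions.Dirichlet(torch.ones(num_objectives)).sample()

def scalarized_reward(per_objective: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # Linear scalarization r = sum_i w_i * r_i, computed per generated sample.
    return (per_objective * w).sum(dim=-1)

if __name__ == "__main__":
    w = sample_preference()       # resampled at every post-training step
    rewards = torch.rand(4, 2)    # stand-in per-sample rewards (e.g. adherence, fidelity)
    print(w, scalarized_reward(rewards, w))
```

In this reading, the scalarized reward drives the RL update while the same weights w are fed to the model as a conditioning signal, which is what makes the trade-off steerable at inference time.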

Why it matters

Existing RL post-training methods handle multiple conflicting objectives by fixing a single trade-off at training time. ParetoSlider instead enables continuous, inference-time control over reward preferences, letting users dynamically balance competing goals such as prompt adherence and source fidelity.

Original Abstract

Reinforcement Learning (RL) post-training has become the standard for aligning generative models with human preferences, yet most methods rely on a single scalar reward. When multiple criteria matter, the prevailing practice of "early scalarization" collapses rewards into a fixed weighted sum. This commits the model to a single trade-off point at training time, providing no inference-time control over inherently conflicting goals -- such as prompt adherence versus source fidelity in image editing. We introduce ParetoSlider, a multi-objective RL (MORL) framework that trains a single diffusion model to approximate the entire Pareto front. By training the model with continuously varying preference weights as a conditioning signal, we enable users to navigate optimal trade-offs at inference time without retraining or maintaining multiple checkpoints. We evaluate ParetoSlider across three state-of-the-art flow-matching backbones: SD3.5, FluxKontext, and LTX-2. Our single preference-conditioned model matches or exceeds the performance of baselines trained separately for fixed reward trade-offs, while uniquely providing fine-grained control over competing generative goals.
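For intuition on the inference-time control the abstract describes, here is a hedged sketch of sweeping a two-objective preference weight; `pipe` and its `preference` keyword are hypothetical stand-ins for a preference-conditioned sampler, not an actual ParetoSlider API:

```python
import torch

def preference_sweep(pipe, prompt: str, steps: int = 5):
    # One generation per point on a linear sweep between the two objectives,
    # e.g. prompt adherence (alpha) vs. source fidelity (1 - alpha).
    outputs = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        w = torch.tensor([alpha.item(), 1.0 - alpha.item()])
        outputs.append(pipe(prompt, preference=w))
    return outputs

# Usage with any callable that accepts a `preference` keyword, e.g.:
# images = preference_sweep(my_preference_conditioned_pipe, "a red bicycle", steps=7)
```

Because a single checkpoint serves every point on the sweep, no retraining or model switching is needed between the two extremes.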
