ArXiv TLDR

MARBLE: Multi-Aspect Reward Balance for Diffusion RL

arXiv:2605.06507

Canyu Zhao, Hao Chen, Yunze Tong, Yu Qiao, Jiacheng Li + 1 more

cs.CV, cs.LG

TLDR

MARBLE introduces a gradient-space optimization framework to balance multiple rewards for diffusion RL, improving all dimensions simultaneously without manual weighting.

Key contributions

  • Proposes MARBLE, a gradient-space framework for multi-aspect reward balancing in diffusion RL.
  • Uses independent advantage estimators and per-reward policy gradients, harmonized into a single update direction via Quadratic Programming (see the sketch after this list).
  • Achieves simultaneous improvement across all five reward dimensions on SD3.5 Medium, turning the worst-aligned reward's gradient cosine from negative (in 80% of mini-batches under weighted summation) to consistently positive.
  • Includes an amortized formulation and EMA smoothing for efficiency and stability, running at 0.97× the baseline training speed.
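
The paper's abstract does not spell out the exact QP, but a standard instance of harmonizing per-reward gradients is the MGDA-style min-norm problem: find simplex weights α minimizing ||Σₖ αₖ gₖ||². The sketch below solves that QP with Frank-Wolfe in NumPy; the function name `min_norm_coeffs` and the solver choice are illustrative assumptions, not MARBLE's stated formulation.

```python
import numpy as np

def min_norm_coeffs(grads, iters=100):
    """Frank-Wolfe solve of the min-norm QP over the simplex:
        min_alpha ||sum_k alpha_k g_k||^2,  alpha_k >= 0, sum_k alpha_k = 1.
    grads: (K, D) array, one flattened policy gradient per reward.
    NOTE: a sketch of one plausible QP, not MARBLE's exact formulation."""
    K = grads.shape[0]
    G = grads @ grads.T            # (K, K) Gram matrix of pairwise inner products
    alpha = np.full(K, 1.0 / K)    # start from uniform weights
    for _ in range(iters):
        k = int(np.argmin(G @ alpha))   # simplex vertex minimizing the linearization
        d = -alpha
        d[k] += 1.0                     # Frank-Wolfe direction e_k - alpha
        denom = d @ G @ d
        if denom <= 1e-12:              # direction is ~zero: converged
            break
        step = np.clip(-(alpha @ G @ d) / denom, 0.0, 1.0)  # exact line search
        alpha = alpha + step * d
    return alpha

# Illustrative usage: harmonize K=5 per-reward gradients into one update direction.
grads = np.random.randn(5, 1000)
alpha = min_norm_coeffs(grads)
update = alpha @ grads
# At the optimum, <g_k, update> >= ||update||^2 for every k, so the combined
# direction keeps a positive cosine with each per-reward gradient.
print(grads @ update, update @ update)
```

This property mirrors the abstract's headline result: the balanced direction maintains a non-negative cosine with every reward's gradient, which a fixed weighted sum cannot guarantee.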

Why it matters

Current diffusion RL struggles to optimize multiple image criteria simultaneously without extensive manual tuning. MARBLE offers an efficient gradient-space solution that balances per-reward gradients automatically, yielding a single unified model that improves all optimized reward dimensions and advancing alignment with complex, multi-dimensional human preferences.

Original Abstract

Reinforcement learning fine-tuning has become the dominant approach for aligning diffusion models with human preferences. However, assessing images is intrinsically a multi-dimensional task, and multiple evaluation criteria need to be optimized simultaneously. Existing practice deals with multiple rewards by training one specialist model per reward, optimizing a weighted-sum reward $R(x)=\sum_k w_k R_k(x)$, or sequentially fine-tuning with a hand-crafted stage schedule. These approaches either fail to produce a unified model that can be jointly trained on all rewards or necessitate heavy, manually tuned sequential training. We find that the failure stems from using a naive weighted-sum reward aggregation. This approach suffers from a sample-level mismatch because most rollouts are specialist samples, highly informative for certain reward dimensions but irrelevant for others; consequently, weighted summation dilutes their supervision. To address this issue, we propose MARBLE (Multi-Aspect Reward BaLancE), a gradient-space optimization framework that maintains independent advantage estimators for each reward, computes per-reward policy gradients, and harmonizes them into a single update direction without manually tuned reward weighting, by solving a Quadratic Programming problem. We further propose an amortized formulation that exploits the affine structure of the loss used in DiffusionNFT to reduce the per-step cost from $K+1$ backward passes to near single-reward baseline cost, together with EMA smoothing on the balancing coefficients to stabilize updates against transient single-batch fluctuations. On SD3.5 Medium with five rewards, MARBLE improves all five reward dimensions simultaneously, turns the worst-aligned reward's gradient cosine from negative in 80% of mini-batches under weighted summation to consistently positive, and runs at 0.97× the training speed of the baseline.
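
To make the amortization concrete: if the loss is affine in per-sample weights, $L(w)=\sum_i w_i \ell_i$ (as the abstract says holds for DiffusionNFT), then the gradient of $\sum_k \alpha_k L(A_k)$ equals the gradient of $L(\sum_k \alpha_k A_k)$, so one backward pass on α-combined advantages replaces $K$ separate ones. Below is a minimal PyTorch sketch under that assumption; `amortized_update`, the tensor shapes, and `beta=0.9` are illustrative, not the paper's API.

```python
import torch

def amortized_update(loss_terms, per_reward_adv, alpha_ema, alpha_new, beta=0.9):
    """loss_terms: (B,) differentiable per-sample terms of an affine loss.
    per_reward_adv: (K, B) independent, per-reward advantage estimates.
    alpha_ema / alpha_new: (K,) smoothed and fresh QP balancing coefficients.
    A sketch assuming DiffusionNFT-style affine structure, not the paper's code."""
    # EMA smoothing damps transient single-batch fluctuations in the coefficients.
    alpha_ema = beta * alpha_ema + (1.0 - beta) * alpha_new
    # Affine structure: weighting advantages by alpha before the backward pass
    # yields the same gradient as K separately weighted backward passes summed.
    combined_adv = alpha_ema @ per_reward_adv            # (B,)
    loss = (combined_adv.detach() * loss_terms).mean()   # single scalar loss
    loss.backward()                                      # one backward, not K+1
    return alpha_ema

# Illustrative usage with dummy tensors (K=5 rewards, batch of 8 samples).
K, B = 5, 8
loss_terms = torch.randn(B, requires_grad=True)   # stand-in for model loss terms
per_reward_adv = torch.randn(K, B)
alpha_new = torch.rand(K)
alpha_new /= alpha_new.sum()
alpha_ema = amortized_update(loss_terms, per_reward_adv,
                             torch.full((K,), 1.0 / K), alpha_new)
```

How the fresh coefficients `alpha_new` are obtained (e.g., from the QP sketched above) is left out here; the point of the sketch is only the single-backward application step plus EMA smoothing.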
