ArXiv TLDR

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

arXiv:2204.05862

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen + 26 more

cs.CL, cs.LG

TLDR

This paper demonstrates that reinforcement learning from human feedback (RLHF) can fine-tune language models to be both helpful and harmless, improving performance on almost all NLP evaluations while remaining fully compatible with training for specialized skills such as coding and summarization.
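
RLHF of this kind typically begins with a preference model trained on human comparisons of paired responses. The snippet below is a minimal sketch of such a pairwise ranking loss under that assumption; the scores and the `preference_loss` helper are illustrative placeholders, not the paper's actual code or data.

```python
# Minimal sketch of a pairwise preference-model loss, as commonly used in RLHF.
# The score tensors are placeholders standing in for a learned preference model's
# scalar outputs on (chosen, rejected) response pairs.
import torch
import torch.nn.functional as F

def preference_loss(chosen_scores: torch.Tensor, rejected_scores: torch.Tensor) -> torch.Tensor:
    """Push the preference model to score the human-preferred response higher."""
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Example with placeholder scores for a small batch of comparisons.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.5, 0.4, 1.1])
print(preference_loss(chosen, rejected))  # scalar loss to minimize
```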

Key contributions

  • Applied RLHF to align language models for helpfulness and harmlessness, improving performance on almost all NLP evaluations rather than degrading it.
  • Developed an iterated online training approach in which preference models and RL policies are updated on a weekly cadence with fresh human feedback data.
  • Analyzed the robustness of RLHF training, identifying a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization (sketched below), alongside peripheral studies on calibration, competing objectives, and out-of-distribution detection.
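
The robustness finding says that, over RL training, the preference-model reward grows roughly as reward ≈ α·sqrt(KL), where KL is the divergence between the current policy and its initialization. A minimal numerical sketch of fitting that relation is shown below, using made-up checkpoint values for illustration rather than the paper's training runs.

```python
# Sketch of fitting the reward-vs-sqrt(KL) relation reported in the paper.
# The (KL, reward) pairs are hypothetical numbers, not data from the paper.
import numpy as np

kl = np.array([1.0, 4.0, 9.0, 16.0, 25.0, 36.0])       # KL(policy || init) at checkpoints
reward = np.array([0.9, 2.1, 2.9, 4.2, 4.8, 6.1])       # mean preference-model reward

# Least-squares fit of reward ≈ alpha * sqrt(KL) + beta.
sqrt_kl = np.sqrt(kl)
A = np.vstack([sqrt_kl, np.ones_like(sqrt_kl)]).T
(alpha, beta), *_ = np.linalg.lstsq(A, reward, rcond=None)

print(f"reward ≈ {alpha:.2f} * sqrt(KL) + {beta:.2f}")
```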

Why it matters

Aligning language models to be helpful and harmless is critical for safe and effective AI deployment. This work advances the state of the art by showing that RLHF not only improves general NLP capabilities but also integrates well with specialized skills, offering a scalable, iterative training framework informed by human preferences. The insights into training dynamics and robustness further guide future development of aligned AI systems.

Original Abstract

We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.
