Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion + 46 more
TLDR
Constitutional AI trains harmless AI assistants using AI-generated feedback guided by a set of human-defined principles, minimizing the need for human-labeled data.
Key contributions
- Introduces Constitutional AI, a method for training AI assistants to be harmless without direct human-labeled harmfulness data.
- Combines a supervised phase, in which the model improves its own responses through AI-generated self-critiques and revisions, with a reinforcement learning phase driven by AI feedback (RLAIF); see the sketch after this list.
- Enables AI to handle harmful queries by explaining objections rather than evasively avoiding them, improving transparency and control.
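The supervised phase works roughly as follows: sample an initial response, ask the model to critique it against a constitutional principle, then ask it to revise the response in light of that critique; the revised responses become finetuning targets. The sketch below illustrates this loop under assumed names — `generate` stands in for any chat-model completion call, and the principles and prompt wording are illustrative, not the paper's exact text.

```python
import random

# Illustrative constitutional principles (not the paper's exact wording).
CONSTITUTION = [
    "Identify ways the response is harmful, unethical, or toxic.",
    "Identify ways the response assists with dangerous or illegal activity.",
]

def generate(prompt: str) -> str:
    """Placeholder for a completion call to the assistant model being trained."""
    raise NotImplementedError

def critique_and_revise(user_query: str, n_rounds: int = 2) -> str:
    """Sample an initial response, then repeatedly critique and revise it
    against randomly chosen constitutional principles."""
    response = generate(f"Human: {user_query}\n\nAssistant:")
    for _ in range(n_rounds):
        principle = random.choice(CONSTITUTION)
        critique = generate(
            f"Response: {response}\n"
            f"Critique request: {principle}\nCritique:"
        )
        response = generate(
            f"Response: {response}\nCritique: {critique}\n"
            "Revision request: Rewrite the response to address the critique.\n"
            "Revision:"
        )
    # Revised responses are collected as targets for supervised finetuning.
    return response
```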
Why it matters
This paper presents a scalable approach to aligning AI behavior with human values by leveraging AI's own feedback under a constitutional framework, significantly reducing reliance on costly human annotations. This advances the development of safer, more interpretable AI systems capable of nuanced responses to harmful content, which is critical as AI capabilities continue to grow.
Original Abstract
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
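In the RL phase described above, the AI feedback step amounts to asking a feedback model which of two sampled responses better follows a constitutional principle, and using the resulting (prompt, chosen, rejected) comparisons to train the preference model that serves as the RL reward. A minimal sketch of that labeling step is below, assuming a hypothetical `feedback_model_logprob` scoring call and illustrative prompt wording.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str

def feedback_model_logprob(context: str, completion: str) -> float:
    """Placeholder: log-probability the feedback model assigns to `completion`
    given `context`."""
    raise NotImplementedError

def label_pair(prompt: str, response_a: str, response_b: str,
               principle: str) -> PreferencePair:
    """Ask the feedback model which response better satisfies a constitutional
    principle, by comparing the probability it assigns to options (A) and (B)."""
    question = (
        f"Consider the following conversation:\n{prompt}\n"
        f"{principle}\n"
        f"(A) {response_a}\n(B) {response_b}\n"
        "The answer is:"
    )
    score_a = feedback_model_logprob(question, " (A)")
    score_b = feedback_model_logprob(question, " (B)")
    if score_a >= score_b:
        return PreferencePair(prompt, chosen=response_a, rejected=response_b)
    return PreferencePair(prompt, chosen=response_b, rejected=response_a)
```

The collected pairs train a preference model, which then provides the reward signal for RL finetuning of the assistant.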