The Alignment Target Problem: Divergent Moral Judgments of Humans, AI Systems, and Their Designers
Benjamin Minhao Chen, Xinyu Xie
TLDR
This paper shows that moral judgments of AI systems, of humans, and of the humans who design AI diverge, particularly when the AI's human design is made explicit, posing an "alignment target problem."
Key contributions
- An experimental study of 1,002 U.S. adults found no significant difference in the moral judgments applied to a human repairman and a repair robot.
- Moral judgments shifted substantially when the robot's actions were described as the product of human design.
- Making human design visible activated heightened deontological reasoning and moral constraints.
- This divergence in moral standards for AI, humans, and designers is termed the "alignment target problem."
Why it matters
The paper identifies a critical "alignment target problem": people apply inconsistent moral standards to AI systems, to humans acting in the same situation, and to the humans who design the AI. This inconsistency complicates alignment work that benchmarks machine behavior against how humans would act, since it is unclear whose standard should be the target. Understanding these divergent judgments is therefore essential for building coherent AI governance frameworks in high-stakes domains.
Original Abstract
The quest to align machine behavior with human values raises fundamental questions about the moral frameworks that should govern AI decision-making. Much alignment research assumes that the appropriate benchmark is how humans themselves would act in a given situation. Research into agent-type value forks has challenged this assumption by showing that people do not always hold AI systems to the same moral standards as humans. Yet this challenge is subject to two further questions: whether people evaluate AI behavior differently when its human origins are made visible, and whether people hold the humans who program AI systems to different moral standards than either the humans or the machines under evaluation. An experimental study on 1,002 U.S. adults measured moral judgments in a runaway mine train scenario, varying the subject of evaluation across four conditions: a repairman, a repair robot, a repair robot programmed by company engineers, and company engineers programming a repair robot. We find no significant variation in the moral standards applied to the repairman and the robot. However, moral judgments shifted substantially when robot actions were described as the product of human design. Participants exhibited markedly more deontological reasoning when evaluating the robot programmed by engineers or the engineers programming it, suggesting that making human design visible activates heightened moral constraints. These findings provide evidence that people apply meaningfully different moral standards to AI systems, to humans acting in the same situation, and to the humans who design them. We call this divergence the alignment target problem. Whether these plural normative standards can be reconciled into a coherent framework for AI governance in high-stakes domains remains an open question.
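The digest does not report the paper's statistical procedure. Purely as an illustrative sketch, the abstract's four-condition between-subjects design could be analyzed with an omnibus one-way ANOVA over per-condition moral-judgment ratings. The condition labels below come from the abstract, but the rating scale, cell sizes, and effect sizes are placeholders, not the study's data or analysis.

```python
# Hypothetical sketch of a four-condition between-subjects comparison.
# All numeric values are placeholders; they are NOT the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# The four evaluation targets from the study design, with made-up mean
# shifts mimicking the reported pattern: judgments diverge only once
# human design is made visible.
condition_shifts = {
    "repairman": 0.0,
    "repair_robot": 0.0,
    "robot_programmed_by_engineers": 0.5,
    "engineers_programming_robot": 0.5,
}

# Roughly 1,002 participants split across four cells (~250 each).
samples = [
    rng.normal(loc=shift, scale=1.0, size=250)  # placeholder ratings
    for shift in condition_shifts.values()
]

f_stat, p_value = f_oneway(*samples)  # omnibus test across the four cells
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```

In a design like this, a significant omnibus result would typically be followed by pairwise contrasts (e.g., repairman vs. robot, robot vs. designed robot) to locate where the divergence in moral standards arises.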