ArXiv TLDR

Smart But Not Moral? Moral Alignment In Human-AI Decision-Making

arXiv: 2604.14371

Christiane Ernst, Luis Gutmann, Domenique Zipperling, Kathrin Figl, Niklas Kühl

cs.HC

TLDR

This paper introduces moral alignment as a critical dimension of human-AI decision-making, especially in high-stakes contexts.

Key contributions

  • Defines moral alignment as the perceived congruence between the values embedded in an AI system's decision logic and stakeholders' moral intuitions.
  • Argues that moral alignment may be more fundamental than functional or behavioral alignment.
  • Applies Moral Foundations Theory to take a multi-stakeholder perspective on AI ethics.
  • Highlights why moral (mis)alignment matters for the meaningful integration of AI in sensitive contexts.

Why it matters

AI-supported decisions in critical areas require more than technical alignment; moral congruence between a system's values and stakeholders' intuitions is just as vital. This paper shifts the focus to moral alignment, which it argues is crucial for trustworthy and meaningful AI integration in sensitive contexts.

Original Abstract

In high-stakes AI-supported decisions, considerations are not purely technical but involve moral judgments about fairness, responsibility, and harm. While prior research has focused mainly on functional or behavioral alignment, this paper argues that moral alignment may be a more fundamental dimension of human-AI decision-making. Moral alignment is defined as the perceived congruence between the values embedded in an AI system's decision logic and the moral intuitions of stakeholders. Building on Moral Foundations Theory, the paper adopts a multi-stakeholder perspective and highlights why moral (mis)alignment matters for the meaningful integration of AI in sensitive contexts.
