ArXiv TLDR

Human Agency, Causality, and the Human Computer Interface in High-Stakes Artificial Intelligence

2604.12793

Georges Hattab

cs.HC

TLDR

This paper argues that high-stakes AI erodes human agency and proposes a Causal-Agency Framework to restore human causal control at the interface.

Key contributions

  • Argues high-stakes AI erodes human agency, shifting focus from trust to causal control.
  • Frames "bad AI" as "bad UI": catastrophic interface failures that misrepresent system state and cause human error.
  • Critiques current XAI for its correlational focus and inability to represent uncertainty.
  • Proposes a Causal-Agency Framework (CAF) to restore human agency at the AI interface.

Why it matters

This paper highlights a critical, often overlooked crisis in AI ethics: the erosion of human agency in high-stakes systems. By proposing a Causal-Agency Framework, it offers a novel approach to designing AI interfaces that preserve human causal control. This work is crucial for developing safer, more effective, and genuinely human-centered AI.

Original Abstract

Current discourse on Artificial Intelligence (AI) ethics, dominated by "trustworthy" and "responsible" AI, overlooks a more fundamental human-computer interaction (HCI) crisis: the erosion of human agency. This paper argues that the primary challenge of high-stakes AI systems is not trust, but the preservation of human causal control. We posit that "bad AI" will function as "bad UI," a metaphor for catastrophic interface failures that misrepresent system state and lead to human error. Applying Marshall McLuhan's media theory, AI can be framed as a technology of "augmentation" that simultaneously "amputates" the user's direct perception of causality. This places the interface as the critical locus where a "double uncertainty"--that of the human user and that of the probabilistic model--must be mediated. We critique current Explainable AI (XAI) for its correlational focus and failure to represent uncertainty. We conclude by proposing a rigorous, nested Causal-Agency Framework (CAF) that integrates causal models, uncertainty quantification, and human-centered evaluation to restore agency at the interface.
