ArXiv TLDR

Agentic Explainability at Scale: Between Corporate Fears and XAI Needs

arXiv: 2604.14984

Yomna Elsayed, Cecily Jones

cs.HC, cs.AI

TLDR

This paper addresses corporate fears around "Agent Sprawl" in enterprise agentic AI adoption, proposing design-time and runtime explainability techniques and a prototype "Agentic AI Card."

Key contributions

  • Identifies enterprise AI governance concerns regarding agentic AI autonomy and "Agent Sprawl."
  • Proposes design-time and runtime explainability techniques to mitigate agentic AI risks.
  • Introduces a prototype "Agentic AI Card" to facilitate secure, scalable agent deployment.

Why it matters

As agentic AI adoption accelerates, "Agent Sprawl" — agents proliferating faster than the governance processes that oversee them — poses significant challenges for enterprises. This paper documents the concerns of AI governance professionals and offers practical explainability techniques, along with a prototype "Agentic AI Card," to enable safer, more transparent agent deployment at scale.

Original Abstract

As companies enter the race for agentic AI adoption, fears surface around agentic autonomy and its subsequent risks. These fears compound as companies scale their agentic AI adoption with low-code applications, without a comparable scaling in their governance processes and expertise resulting in a phenomenon known as "Agent Sprawl". While shadow AI tools can help with agentic discovery and identification, few observability tools offer insights into the agents' configuration and settings or the decision-making process during agent-to-agent communication and orchestration. This paper explores AI governance professionals' concerns in enterprise settings, while offering design-time and runtime explainability techniques as suggested by AI governance experts for addressing those fears. Finally, we provide a preliminary prototype of an Agentic AI Card that can help companies feel at ease deploying agents at scale.
