Position: agentic AI orchestration should be Bayes-consistent
Theodore Papamarkou, Pierre Alquier, Matthias Bauer, Wray Buntine, Andrew Davison + 25 more
TLDR
This paper argues for applying Bayesian principles to the orchestration layer of agentic AI systems to improve decision-making under uncertainty.
Key contributions
- Argues for Bayesian principles at the agentic AI orchestration layer for decisions under uncertainty.
- Proposes Bayesian decision theory as a framework for maintaining beliefs over task-relevant latent quantities, updating them from observed interactions, and choosing coherent actions.
- Places Bayesian principles at the orchestration layer rather than inside the LLM agent's parameters themselves.
- Offers practical properties, concrete examples, and design patterns for Bayesian control in agentic AI.
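The belief-maintenance pattern in the contributions above can be sketched minimally. This is an illustrative toy (the class and tool names are hypothetical, not from the paper): an orchestrator keeps a Beta posterior over each tool's success rate, picks a tool by Thompson sampling, and updates the chosen tool's belief from the observed outcome.

```python
import random

class BayesianToolRouter:
    """Hypothetical sketch of Bayesian control at the orchestration layer."""

    def __init__(self, tools):
        # Beta(1, 1) prior = uniform belief over each tool's success rate.
        self.beliefs = {t: [1, 1] for t in tools}

    def choose(self):
        # Thompson sampling: draw a plausible success rate per tool from
        # its posterior, then act greedily on the sampled values.
        draws = {t: random.betavariate(a, b)
                 for t, (a, b) in self.beliefs.items()}
        return max(draws, key=draws.get)

    def update(self, tool, success):
        # Conjugate Beta-Bernoulli update from the observed interaction:
        # success increments alpha, failure increments beta.
        a, b = self.beliefs[tool]
        self.beliefs[tool] = [a + success, b + (1 - success)]

router = BayesianToolRouter(["search", "calculator", "code_exec"])
tool = router.choose()
router.update(tool, success=1)
```

Because the posterior concentrates as evidence accumulates, the router naturally shifts from exploring tools to exploiting the one that has worked; the LLM agents themselves are untouched, matching the paper's point that the Bayesian machinery lives in the control layer.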
Why it matters
This paper addresses a critical challenge in deploying LLMs: making robust decisions under uncertainty. By advocating for Bayesian principles at the orchestration level, it offers a practical path to more coherent and effective agentic AI systems, enhancing human-AI collaboration in high-value deployments.
Original Abstract
LLMs excel at predictive tasks and complex reasoning tasks, but many high-value deployments rely on decisions under uncertainty, for example, which tool to call, which expert to consult, or how many resources to invest. While the usefulness and feasibility of Bayesian approaches remain unclear for LLM inference, this position paper argues that the control layer of an agentic AI system (that orchestrates LLMs and tools) is a clear case where Bayesian principles should shine. Bayesian decision theory provides a framework for agentic systems that can help to maintain beliefs over task-relevant latent quantities, to update these beliefs from observed agentic and human-AI interactions, and to choose actions. Making LLMs themselves explicitly Bayesian belief-updating engines remains computationally intensive and conceptually nontrivial as a general modeling target. In contrast, this paper argues that coherent decision-making requires Bayesian principles at the orchestration level of the agentic system, not necessarily the LLM agent parameters. This paper articulates practical properties for Bayesian control that fit modern agentic AI systems and human-AI collaboration, and provides concrete examples and design patterns to illustrate how calibrated beliefs and utility-aware policies can improve agentic AI orchestration.
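The abstract's "utility-aware policies" can be made concrete with a small hedged example (all utilities and probabilities below are illustrative numbers, not from the paper): given a calibrated belief `p_correct` that the orchestrator's draft answer is right, it picks the action with highest expected utility, trading off the cost of consulting an expert against the risk of answering wrongly.

```python
def expected_utility(action, p_correct):
    # Assumed payoffs: +1.0 for a correct answer, -2.0 for a wrong one,
    # and a fixed 0.5 cost to consult an expert who is right 95% of the time.
    if action == "answer":
        return p_correct * 1.0 + (1 - p_correct) * (-2.0)
    if action == "consult_expert":
        return 0.95 * 1.0 + 0.05 * (-2.0) - 0.5
    raise ValueError(f"unknown action: {action}")

def decide(p_correct):
    # Bayes-consistent choice: maximize expected utility under the belief.
    actions = ["answer", "consult_expert"]
    return max(actions, key=lambda a: expected_utility(a, p_correct))

decide(0.9)  # confident belief -> "answer"
decide(0.4)  # uncertain belief -> "consult_expert"
```

The same pattern generalizes to the abstract's other examples (which tool to call, how many resources to invest): each is an expected-utility maximization over the orchestrator's current posterior.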