From CRUD to Autonomous Agents: Formal Validation and Zero-Trust Security for Semantic Gateways in AI-Native Enterprise Systems
TLDR
This paper introduces a Semantic Gateway with a Model Context Protocol to provide formal validation and zero-trust security for AI-native enterprise systems.
Key contributions
- Proposes a Semantic Gateway governed by the Model Context Protocol (MCP) for AI-native enterprise systems.
- Validates autonomous agents as stochastic state-transition systems using enabled-tool graphs and fuzzing.
- Introduces a three-layer Zero-Trust security model for agentic deployments, including a Semantic Firewall.
- Adapts semantic fuzzing and Enabledness-Preserving Abstractions (EPAs) to audit agent behavior.
Why it matters
This paper addresses critical security challenges introduced by probabilistic LLMs in AI-native enterprise systems. It provides a robust framework for formally validating and securing autonomous agents, ensuring auditable and trustworthy deployments. This is crucial for the safe transition to agent-driven architectures.
Original Abstract
Enterprise software engineering is shifting away from deterministic CRUD/REST architectures toward AI-native systems where large language models act as cognitive orchestrators. This transition introduces a critical security tension: probabilistic LLMs weaken classical mechanisms for validation, access control, and formal testing. This paper proposes the design, formal validation, and empirical evaluation of a Semantic Gateway governed by the Model Context Protocol (MCP). The gateway reframes the enterprise API as a semantic surface where tools are dynamically discovered, authorized, and executed based on intent and policy enforcement. The central contribution rests on a paradigm shift: autonomous agents must not be validated as traditional software nor as simple API consumers, but as stochastic state-transition systems whose behavior must be abstracted, fuzzed, and audited through enabled-tool graphs. The architecture introduces a three-layer Zero-Trust security model comprising a pre-inference Semantic Firewall, deterministic Tool-Level RBAC, and out-of-band Cryptographic Human-in-the-Loop approval. Enabledness-Preserving Abstractions (EPAs) and greybox semantic fuzzing--originally developed for blockchain smart contract verification--are adapted to audit agent behavior in enterprise environments. Results demonstrate an 84.2% reduction in incidental code. Across 500,000 multi-turn fuzzing sequences, the methodology achieved a 100% discovery rate of hidden unauthorized state transitions, proving that dynamic formal verification is strictly necessary for secure agentic deployment.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.