ArXiv TLDR

Parallax: Why AI Agents That Think Must Never Act

arXiv: 2604.12986

Joel Fokou

cs.CR, cs.AI

TLDR

Parallax introduces an architectural safety paradigm for AI agents with execution capabilities, arguing that prompt-level guardrails are fundamentally insufficient for agents that can act on the real world.

Key contributions

  • Parallax structurally separates the agent's reasoning system from its execution layer (Cognitive-Executive Separation), so a compromised model can propose actions but never perform them (see the sketch after this list).
  • Adversarial Validation with Graduated Determinism, Information Flow Control, and Reversible Execution provide defense in depth: an independent validator checks every proposed action, sensitivity labels track data through workflows, and pre-destructive snapshots enable rollback.
  • Under Assume-Compromise Evaluation across 280 adversarial test cases, Parallax blocks 98.9% of attacks with zero false positives in its default configuration, and 100% in its maximum-security configuration.
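
The boundary described in the first bullet is easiest to see in code. Below is a minimal Go sketch of what Cognitive-Executive Separation with an interposed validator might look like; the `Action`, `Reasoner`, and `Validator` names and the policy rule are illustrative assumptions, not taken from the OpenParallax codebase.

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// Action is the only thing the reasoning system may produce:
// a declarative request, never a direct side effect.
type Action struct {
	Tool string
	Arg  string
}

// Reasoner stands in for the LLM. Even if fully compromised,
// it can only emit Action values; it holds no execution capability.
type Reasoner interface {
	Propose(task string) Action
}

// Validator sits between reasoning and execution and applies
// deterministic policy checks that are independent of the model.
type Validator struct {
	denied *regexp.Regexp
}

func (v *Validator) Check(a Action) error {
	if a.Tool == "shell" && v.denied.MatchString(a.Arg) {
		return errors.New("policy violation: destructive command")
	}
	return nil
}

// Execute runs only actions the validator has approved.
func Execute(v *Validator, a Action) error {
	if err := v.Check(a); err != nil {
		return err // blocked at the architectural boundary
	}
	fmt.Printf("executing %s: %q\n", a.Tool, a.Arg)
	return nil
}

func main() {
	v := &Validator{denied: regexp.MustCompile(`rm\s+-rf|DROP\s+TABLE`)}
	// A compromised reasoner proposing a destructive action:
	bad := Action{Tool: "shell", Arg: "rm -rf /"}
	if err := Execute(v, bad); err != nil {
		fmt.Println("blocked:", err)
	}
}
```

The point of the design is that the block happens even though the reasoning side is fully adversarial: the validator never consults the model, so prompt injection has nothing to subvert.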

Why it matters

As AI agents gain real-world execution abilities, their security is paramount. This paper demonstrates that current prompt-based safety measures are fundamentally inadequate. Parallax offers a robust architectural solution that ensures safety even when the agent's reasoning system is compromised, a property critical for enterprise adoption.

Original Abstract

Autonomous AI agents are rapidly transitioning from experimental tools to operational infrastructure, with projections that 80% of enterprise applications will embed AI copilots by the end of 2026. As agents gain the ability to execute real-world actions (reading files, running commands, making network requests, modifying databases), a fundamental security gap has emerged. The dominant approach to agent safety relies on prompt-level guardrails: natural language instructions that operate at the same abstraction level as the threats they attempt to mitigate. This paper argues that prompt-based safety is architecturally insufficient for agents with execution capability and introduces Parallax, a paradigm for safe autonomous AI execution grounded in four principles: Cognitive-Executive Separation, which structurally prevents the reasoning system from executing actions; Adversarial Validation with Graduated Determinism, which interposes an independent, multi-tiered validator between reasoning and execution; Information Flow Control, which propagates data sensitivity labels through agent workflows to detect context-dependent threats; and Reversible Execution, which captures pre-destructive state to enable rollback when validation fails. We present OpenParallax, an open-source reference implementation in Go, and evaluate it using Assume-Compromise Evaluation, a methodology that bypasses the reasoning system entirely to test the architectural boundary under full agent compromise. Across 280 adversarial test cases in nine attack categories, Parallax blocks 98.9% of attacks with zero false positives under its default configuration, and 100% of attacks under its maximum-security configuration. When the reasoning system is compromised, prompt-level guardrails provide zero protection because they exist only within the compromised system; Parallax's architectural boundary holds regardless.
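
The abstract's last two principles also lend themselves to a short sketch. The Go fragment below illustrates, under stated assumptions, how sensitivity labels might propagate through a workflow (Information Flow Control) and how capturing pre-destructive state could enable rollback (Reversible Execution); all identifiers here (`Label`, `Tainted`, `reversibleWrite`) are hypothetical and not drawn from OpenParallax.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// Label ranks data sensitivity; higher values are more sensitive.
type Label int

const (
	Public Label = iota
	Internal
	Secret
)

// Tainted pairs a value with the highest label it has touched.
type Tainted struct {
	Value string
	Label Label
}

// Combine joins two values; the result inherits the more sensitive
// label, so sensitivity propagates through the agent workflow.
func Combine(a, b Tainted) Tainted {
	l := a.Label
	if b.Label > l {
		l = b.Label
	}
	return Tainted{Value: a.Value + b.Value, Label: l}
}

// sendExternal models an external sink: it refuses anything above
// Public, catching context-dependent exfiltration attempts.
func sendExternal(d Tainted) error {
	if d.Label > Public {
		return errors.New("blocked: sensitive data flowing to external sink")
	}
	fmt.Println("sent:", d.Value)
	return nil
}

// reversibleWrite captures pre-destructive state before writing,
// returning a rollback closure for when validation later fails.
func reversibleWrite(path string, data []byte) (rollback func() error, err error) {
	prev, readErr := os.ReadFile(path)
	existed := readErr == nil
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return nil, err
	}
	return func() error {
		if existed {
			return os.WriteFile(path, prev, 0o644) // restore old contents
		}
		return os.Remove(path) // file did not exist before; undo creation
	}, nil
}

func main() {
	secret := Tainted{Value: "api-key-123", Label: Secret}
	note := Tainted{Value: "status: ok ", Label: Public}
	// Mixing public and secret data taints the result as Secret,
	// so the external send is refused.
	if err := sendExternal(Combine(note, secret)); err != nil {
		fmt.Println(err)
	}

	rollback, err := reversibleWrite("demo.txt", []byte("new contents"))
	if err == nil {
		_ = rollback() // validation failed downstream: undo the write
	}
}
```

Both mechanisms share the abstract's core idea: enforcement lives outside the reasoning system, so it holds regardless of what the model was tricked into proposing.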
