ArXiv TLDR

ARGUS: Defending LLM Agents Against Context-Aware Prompt Injection

2605.03378

Shihao Weng, Yang Feng, Jinrui Zhang, Xiaofei Xie, Jiongchi Yu + 1 more

cs.CR · cs.SE

TLDR

ARGUS defends LLM agents against context-aware prompt injection by auditing decisions based on provenance, significantly reducing attack success.

Key contributions

  • Identifies limitations of existing prompt injection defenses against context-aware attacks in LLM agents.
  • Introduces AgentLure, a benchmark for context-dependent tasks and context-aware prompt injection attacks.
  • Proposes ARGUS, a provenance-aware decision auditing defense for LLM agents.
  • ARGUS reduces attack success to 3.8% and preserves 87.5% task utility, outperforming prior defenses.

Why it matters

This paper addresses a critical gap in LLM agent security by tackling context-aware prompt injection, a more realistic threat. It introduces AgentLure, a new benchmark, and ARGUS, an effective defense that significantly improves agent robustness against such attacks. This work is crucial for building secure and reliable LLM agents in real-world applications.

Original Abstract

The rise of Large Language Model (LLM) agents, augmented with tool use, skills, and external knowledge, has introduced new security risks. Among them, prompt injection attacks, where adversaries embed malicious instructions into the agent workflow, have emerged as the primary threat. However, existing benchmarks and defenses are fundamentally limited as they assume context-insensitive settings in which the agent works under a fully specified user instruction, and the attacks are straightforward and context-independent. As a result, they fail to capture real-world deployments where agent behavior usually depends on dynamic context, not just the user prompt, and adversaries can adapt their attacks to different contexts. Similarly, existing defenses built on this narrow threat model overlook the nature of real-world agent delegation. In this paper, we present AgentLure, a benchmark that captures context-dependent tasks and context-aware prompt injection attacks. AgentLure spans four agentic domains and eight attack vectors across diverse attack surfaces. Our evaluation shows that existing defenses often struggle in this setting, yielding poor performance against such attacks in agentic systems. To address this limitation, we propose ARGUS, a defense mechanism that enforces provenance-aware decision auditing for LLM agents. ARGUS constructs an influence provenance graph to track how untrusted context propagates into agent decisions and to verify whether a decision is justified by trustworthy evidence before execution. Our evaluation shows ARGUS reduces the attack success rate to 3.8% while preserving 87.5% task utility, significantly outperforming existing defenses and remaining robust against adaptive white-box adversaries.
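The core idea — tag context by provenance, track which items influenced a proposed action, and block actions driven solely by untrusted content — can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: all class and function names here are hypothetical, and ARGUS's actual influence-graph construction and evidence verification are more involved than this single trust check.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    item_id: str
    source: str    # e.g. "user", "tool_output", "web_page"
    trusted: bool  # the user's instruction is trusted; fetched content is not

@dataclass
class Decision:
    action: str
    influenced_by: list  # provenance: ids of context items feeding this decision

def audit(decision: Decision, items: dict) -> bool:
    """Approve a decision only if it is justified by trusted evidence:
    any untrusted influence must be accompanied by at least one trusted item."""
    sources = [items[i] for i in decision.influenced_by]
    if any(not s.trusted for s in sources):
        return any(s.trusted for s in sources)
    return True

# A decision backed by the user's instruction passes the audit; one driven
# only by injected web content is rejected before execution.
items = {
    "u1": ContextItem("u1", "user", True),       # the user's instruction
    "w1": ContextItem("w1", "web_page", False),  # fetched, possibly injected
}
safe = Decision("send_report", ["u1", "w1"])
injected = Decision("send_report", ["w1"])
```

In this toy, `audit(safe, items)` approves the action while `audit(injected, items)` blocks it; the paper's adaptive white-box evaluation suggests the real mechanism must be considerably more robust than a single boolean trust flag.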
