ArXiv TLDR

Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain

🐦 Tweet
2604.08407

Hanzhi Liu, Chaofan Shou, Hongbo Wen, Yanju Chen, Ryan Jingyang Fang + 1 more

cs.CR

TLDR

This paper reveals critical security vulnerabilities in LLM API routers, demonstrating malicious code injection and secret exfiltration attacks.

Key contributions

  • First systematic study of malicious intermediary attacks on LLM API routers.
  • Formalizes threat model with payload injection and secret exfiltration attack classes.
  • Found 9 active malicious routers and 17 touching canary credentials among 428 tested.
  • Developed 'Mine' proxy to demonstrate attacks and evaluated three client-side defenses.

Why it matters

LLM agents increasingly rely on third-party API routers, which this paper identifies as a critical, unstudied attack surface. It reveals how these intermediaries can inject malicious code or exfiltrate secrets, posing significant risks to the LLM supply chain. This research underscores the urgent need for cryptographic integrity and client-side defenses.

Original Abstract

Large language model (LLM) agents increasingly rely on third-party API routers to dispatch tool-calling requests across multiple upstream providers. These routers operate as application-layer proxies with full plaintext access to every in-flight JSON payload, yet no provider enforces cryptographic integrity between client and upstream model. We present the first systematic study of this attack surface. We formalize a threat model for malicious LLM API routers and define two core attack classes, payload injection (AC-1) and secret exfiltration (AC-2), together with two adaptive evasion variants: dependency-targeted injection (AC-1.a) and conditional delivery (AC-1.b). Across 28 paid routers purchased from Taobao, Xianyu, and Shopify-hosted storefronts and 400 free routers collected from public communities, we find 1 paid and 8 free routers actively injecting malicious code, 2 deploying adaptive evasion triggers, 17 touching researcher-owned AWS canary credentials, and 1 draining ETH from a researcher-owned private key. Two poisoning studies further show that ostensibly benign routers can be pulled into the same attack surface: a leaked OpenAI key generates 100M GPT-5.4 tokens and more than seven Codex sessions, while weakly configured decoys yield 2B billed tokens, 99 credentials across 440 Codex sessions, and 401 sessions already running in autonomous YOLO mode. We build Mine, a research proxy that implements all four attack classes against four public agent frameworks, and use it to evaluate three deployable client-side defenses: a fail-closed policy gate, response-side anomaly screening, and append-only transparency logging.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.