ArXiv TLDR

BadSkill: Backdoor Attacks on Agent Skills via Model-in-Skill Poisoning

🐦 Tweet
2604.09378

Guiyao Tie, Jiawen Shi, Pan Zhou, Lichao Sun

cs.CRcs.AI

TLDR

BadSkill introduces backdoor attacks on agent skills by poisoning embedded models, revealing a new and distinct supply-chain risk in agent ecosystems.

Key contributions

  • Introduces BadSkill, a backdoor attack targeting model-in-skill threat surfaces in agent ecosystems.
  • Adversaries embed backdoor-fine-tuned models in seemingly benign skills, activating with specific triggers.
  • Achieves up to 99.5% attack success rate while maintaining high benign-side accuracy.
  • Demonstrates effectiveness across diverse model scales and text perturbation types.

Why it matters

This paper identifies a critical, previously unaddressed supply-chain risk in agent ecosystems where third-party skills bundle learned models. It highlights the urgent need for stronger provenance verification and behavioral vetting of these skill artifacts to prevent malicious hidden behaviors.

Original Abstract

Agent ecosystems increasingly rely on installable skills to extend functionality, and some skills bundle learned model artifacts as part of their execution logic. This creates a supply-chain risk that is not captured by prompt injection or ordinary plugin misuse: a third-party skill may appear benign while concealing malicious behavior inside its bundled model. We present BadSkill, a backdoor attack formulation that targets this model-in-skill threat surface. In BadSkill, an adversary publishes a seemingly benign skill whose embedded model is backdoor-fine-tuned to activate a hidden payload only when routine skill parameters satisfy attacker-chosen semantic trigger combinations. To realize this attack, we train the embedded classifier with a composite objective that combines classification loss, margin-based separation, and poison-focused optimization, and evaluate it in an OpenClaw-inspired simulation environment that preserves third-party skill installation and execution while enabling controlled multi-model study. Our benchmark spans 13 skills, including 8 triggered tasks and 5 non-trigger control skills, with a combined main evaluation set of 571 negative-class queries and 396 trigger-aligned queries. Across eight architectures (494M--7.1B parameters) from five model families, BadSkill achieves up to 99.5\% average attack success rate (ASR) across the eight triggered skills while maintaining strong benign-side accuracy on negative-class queries. In poison-rate sweeps on the standard test split, a 3\% poison rate already yields 91.7\% ASR. The attack remains effective across the evaluated model scales and under five text perturbation types. These findings identify model-bearing skills as a distinct model supply-chain risk in agent ecosystems and motivate stronger provenance verification and behavioral vetting for third-party skill artifacts.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.