Black-Box Skill Stealing Attack from Proprietary LLM Agents: An Empirical Study
Zihan Wang, Rui Zhang, Yu Liu, Chi Liu, Qingchuan Zhao + 2 more
TLDR
This paper empirically studies black-box skill stealing from proprietary LLM agents, demonstrating easy extraction and highlighting overlooked copyright risks.
Key contributions
- Presents the first empirical study on black-box skill stealing from proprietary LLM agent systems.
- Develops an automated prompt generation agent to systematically extract hidden skills.
- Demonstrates that skills can be extracted from commercial agents with as few as 3 interactions.
- Proposes defenses across input, inference, and output stages, though attacks remain inexpensive and effective.
Why it matters
This research uncovers a critical and largely overlooked copyright risk for proprietary LLM agent skills, which are central to a growing skill economy. It highlights the urgent need for more robust defense strategies to protect valuable intellectual property in AI systems.
Original Abstract
LLM agents increasingly rely on skills to encapsulate reusable capabilities via progressively disclosed instructions. High-quality skills inject expert knowledge into general-purpose models, improving performance on specialized tasks. This quality and ease of dissemination drive the emergence of a skill economy: free skill marketplaces already report 90368 published skills, while paid marketplaces report more than 2000 listings and over $100,000 in creator earnings. Yet this growing marketplace also creates a new attack surface, as adversaries can interact with public agent to extract hidden proprietary skill content. We present the first empirical study of black-box skill stealing against LLM agent systems. To study this threat, we first derive an attack taxonomy from prior prompt-stealing methods and build an automated stealing prompt generation agent. This agent starts from model-generated seed prompts, expands them through scenario rationalization and structure injection, and enforces diversity via embedding filtering. This process yields a reproducible pipeline for evaluating agent systems. We evaluate such attacks across 3 commercial agent architectures and 5 LLMs. Our results show that agent skills can be extracted with only 3 interactions, posing a serious copyright risk. To mitigate this threat, we design defenses across three stages of the agent pipeline: input, inference, and output. Although these defenses achieve strong results, the attack remains inexpensive and readily automatable, allowing an adversary to launch repeated attempts with different variants; only one successful attempt is sufficient to compromise the protected skill. Overall, our findings suggest that these copyright risks are largely overlooked across proprietary agent ecosystems. We therefore advocate for more robust defense strategies that provide stronger protection guarantees.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.