ArXiv TLDR

SkillSafetyBench: Evaluating Agent Safety under Skill-Facing Attack Surfaces

🐦 Tweet
2605.12015

Chang Jin, An Wang, Zeming Wei, Kai Wang, Biaojie Zeng + 5 more

cs.CRcs.AIcs.CLcs.LGcs.MA

TLDR

SkillSafetyBench evaluates how reusable skills in LLM agents create new attack surfaces, revealing vulnerabilities beyond model-level alignment.

Key contributions

  • Introduces SkillSafetyBench, a runnable benchmark for evaluating skill-mediated safety failures in LLM agents.
  • Features 155 adversarial cases across 47 tasks, 6 risk domains, and 30 safety categories.
  • Uses case-specific rule-based verifiers to detect unsafe actions induced by skill materials.

Why it matters

This paper highlights a critical, overlooked vulnerability in LLM agents: safety failures originating from reusable skills, not just user input. It provides a crucial benchmark to test and improve agent safety beyond model-level alignment, emphasizing the importance of skill interpretation and execution environments.

Original Abstract

Reusable skills are becoming a common interface for extending large language model agents, packaging procedural guidance with access to files, tools, memory, and execution environments. However, this modularity introduces attack surfaces that are largely missed by existing safety evaluations: even when the user request is benign, task-relevant skill materials or local artifacts can steer an agent toward unsafe actions. We present SkillSafetyBench, a runnable benchmark for evaluating such skill-mediated safety failures. SkillSafetyBench includes 155 adversarial cases across 47 tasks, 6 risk domains, and 30 safety categories, each evaluated with a case-specific rule-based verifier. Experiments with multiple CLI agents and model backends show that localized non-user attacks can consistently induce unsafe behavior, with distinct failure patterns across domains, attack methods, and scaffold-model pairings. Our findings suggest that agent safety depends not only on model-level alignment, but also on how agents interpret skills, trust workflow context, and act through executable environments.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.