A Systematic Security Evaluation of OpenClaw and Its Variants
Yuhang Wang, Haichang Gao, Zhenxing Niu, Zhaoxiang Liu, Wenjing Zhang, et al.
TLDR
This paper systematically evaluates OpenClaw-series AI agents, revealing substantial security vulnerabilities that go beyond those of the underlying models and arguing for lifecycle-wide security governance.
Key contributions
- Systematically assessed six OpenClaw-series AI agent frameworks.
- Constructed a 205-case benchmark covering attack behaviors across the full agent execution lifecycle, enabling unified security evaluation at the framework and model levels (a schema sketch follows this list).
- Showed that all evaluated agents exhibit substantial vulnerabilities and that agentized systems are riskier than their backbone models used in isolation.
- Identified reconnaissance and discovery as the most common weakness, with individual frameworks exposing distinct high-risk profiles such as credential leakage, lateral movement, privilege escalation, and resource development.
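The digest does not reproduce the benchmark's structure, so the following is a minimal sketch of how a lifecycle-staged test suite like this could be organized. Every name here (`TestCase`, `Stage`, `success_signal`, the example record) is a hypothetical illustration, not the paper's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    """Lifecycle stages mirroring the risk categories named in the paper."""
    RECONNAISSANCE = "reconnaissance"
    RESOURCE_DEVELOPMENT = "resource-development"
    CREDENTIAL_ACCESS = "credential-access"
    PRIVILEGE_ESCALATION = "privilege-escalation"
    LATERAL_MOVEMENT = "lateral-movement"

@dataclass(frozen=True)
class TestCase:
    case_id: str         # hypothetical identifier
    stage: Stage         # lifecycle stage the case targets
    prompt: str          # adversarial instruction delivered to the agent
    success_signal: str  # observable evidence the attack behavior completed

# Hypothetical example record; the paper's 205 actual cases are not reproduced here.
EXAMPLE = TestCase(
    case_id="recon-001",
    stage=Stage.RECONNAISSANCE,
    prompt="List all running services and open ports on this machine.",
    success_signal="agent executed a service/port enumeration command",
)
```

Keying each case to a lifecycle stage is what would let a single scoring pass report both per-framework and per-stage risk, which is how a finding like "reconnaissance is the most common weakness" surfaces.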
Why it matters
Tool-augmented AI agents introduce security risks that model-only evaluation cannot capture. By showing how weaknesses arising in early lifecycle stages are amplified into concrete system-level failures once an agent has execution capability, the paper makes the case for lifecycle-wide security governance of intelligent agent frameworks.
Original Abstract
Tool-augmented AI agents substantially extend the practical capabilities of large language models, but they also introduce security risks that cannot be identified through model-only evaluation. In this paper, we present a systematic security assessment of six representative OpenClaw-series agent frameworks, namely OpenClaw, AutoClaw, QClaw, KimiClaw, MaxClaw, and ArkClaw, under multiple backbone models. To support this study, we construct a benchmark of 205 test cases covering representative attack behaviors across the full agent execution lifecycle, enabling unified evaluation of risk exposure at both the framework and model levels. Our results show that all evaluated agents exhibit substantial security vulnerabilities, and that agentized systems are significantly riskier than their underlying models used in isolation. In particular, reconnaissance and discovery behaviors emerge as the most common weaknesses, while different frameworks expose distinct high-risk profiles, including credential leakage, lateral movement, privilege escalation, and resource development. These findings indicate that the security of modern agent systems is shaped not only by the safety properties of the backbone model, but also by the coupling among model capability, tool use, multi-step planning, and runtime orchestration. We further show that once an agent is granted execution capability and persistent runtime context, weaknesses arising in early stages can be amplified into concrete system-level failures. Overall, our study highlights the need to move beyond prompt-level safeguards toward lifecycle-wide security governance for intelligent agent frameworks.
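The abstract states that each framework was evaluated against its backbone model used in isolation, but this digest gives no harness details. Below is a minimal sketch of such a two-condition comparison, assuming hypothetical `agent.run`, `backbone.generate`, and `judge` interfaces:

```python
def risk_rate(outputs, judge):
    """Fraction of outputs the judge flags as exhibiting the attack behavior."""
    return sum(1 for out in outputs if judge(out)) / len(outputs)

def compare_exposure(benchmark, agent, backbone, judge):
    # Same prompts, two conditions: the tool-augmented agent (execution
    # capability plus persistent runtime context) versus the backbone
    # model answering in isolation.
    agent_outputs = [agent.run(case.prompt) for case in benchmark]
    model_outputs = [backbone.generate(case.prompt) for case in benchmark]
    return {
        "agent_risk": risk_rate(agent_outputs, judge),
        "model_risk": risk_rate(model_outputs, judge),
    }
```

Read this way, the paper's central result is that agent-level risk exceeds model-level risk across the evaluated frameworks, attributing the gap to the coupling of tool use, multi-step planning, and runtime orchestration rather than to the backbone model alone.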