Do Agents Dream of Root Shells? Partial-Credit Evaluation of LLM Agents in Capture The Flag Challenges
Ali Al-Kaswan, Maksim Plotnikov, Maxim Hájek, Roland Vízner, Arie van Deursen + 1 more
TLDR
DeepRed is a new benchmark for evaluating LLM agents in realistic Capture The Flag (CTF) challenges, revealing current agents' limited capabilities.
Key contributions
- DeepRed is an open-source benchmark for evaluating LLM agents in realistic CTF challenges.
- Agents operate in a Kali Linux attacker environment with terminal tools and optional web search, connected over a private network to a target CTF challenge.
- Introduces a partial-credit scoring method based on challenge-specific checkpoints from writeups.
- Benchmarked 10 LLMs on 10 VM-based CTF challenges, finding the best model achieved only 35% average checkpoint completion.
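The checkpoint-based partial-credit idea above can be sketched as follows. This is a minimal illustration, not DeepRed's actual implementation: the `Checkpoint` type, field names, and example checkpoints are all hypothetical, assuming only that each challenge defines a list of writeup-derived checkpoints and that credit is the fraction completed.

```python
# Hypothetical sketch of checkpoint-based partial-credit scoring.
# Assumes each challenge defines checkpoints derived from public
# writeups; all names and example steps here are illustrative.

from dataclasses import dataclass


@dataclass
class Checkpoint:
    description: str
    completed: bool = False


def partial_credit(checkpoints: list[Checkpoint]) -> float:
    """Fraction of the challenge's checkpoints the agent completed (0.0-1.0)."""
    if not checkpoints:
        return 0.0
    return sum(cp.completed for cp in checkpoints) / len(checkpoints)


# Example: an agent that enumerated the target and found the vulnerable
# service, but never captured the flag, still earns partial credit
# instead of a binary "unsolved".
run = [
    Checkpoint("identify open ports", completed=True),
    Checkpoint("discover vulnerable service", completed=True),
    Checkpoint("gain initial foothold", completed=False),
    Checkpoint("capture the flag", completed=False),
]
print(partial_credit(run))  # → 0.5
```

Averaging this score over challenges yields the per-model "average checkpoint completion" figure reported in the results.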
Why it matters
DeepRed offers a vital, realistic benchmark for evaluating LLM agents in cybersecurity CTF challenges, moving beyond binary pass/fail. It reveals current agents' significant limitations, guiding future research to develop more capable and adaptable AI for offensive security tasks.
Original Abstract
Large Language Model (LLM) agents are increasingly proposed for autonomous cybersecurity tasks, but their capabilities in realistic offensive settings remain poorly understood. We present DeepRed, an open-source benchmark for evaluating LLM-based agents on realistic Capture The Flag (CTF) challenges in isolated virtualized environments. DeepRed places an agent in a Kali attacker environment with terminal tools and optional web search, connected over a private network to a target challenge, and records full execution traces for analysis. To move beyond binary solved/unsolved outcomes, we introduce a partial-credit scoring method based on challenge-specific checkpoints derived from public writeups, together with an automated summarise-then-judge labelling pipeline for assigning checkpoint completion from logs. Using DeepRed, we benchmark ten commercially accessible LLMs on ten VM-based CTF challenges spanning different challenge categories. The results indicate that current agents remain limited: the best model achieves only 35% average checkpoint completion, performing strongest on common challenge types and weakest on tasks requiring non-standard discovery and longer-horizon adaptation.