TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories
Yen-Shan Chen, Sian-Yao Huang, Cheng-Lin Yang, Yun-Nung Chen
TLDR
TraceSafe-Bench assesses LLM guardrails on multi-step tool-calling trajectories, revealing structural competence and architecture are key to mid-trajectory safety.
Key contributions
- Introduces TraceSafe-Bench, the first benchmark for evaluating LLM guardrails on multi-step tool-use trajectories.
- Shows guardrail efficacy relies more on structural data competence (e.g., JSON parsing) than semantic safety alignment.
- Finds model architecture influences risk detection more than model size, with general LLMs outperforming specialized guardrails.
- Demonstrates risk detection accuracy remains stable and can even improve across extended execution steps.
Why it matters
This paper highlights a critical gap in LLM safety, shifting focus from final outputs to intermediate agentic steps. Its findings are crucial for developing more robust guardrails, emphasizing the need to optimize for both structural reasoning and safety alignment. This will enable safer and more reliable autonomous LLM agents.
Original Abstract
As large language models (LLMs) evolve from static chatbots into autonomous agents, the primary vulnerability surface shifts from final outputs to intermediate execution traces. While safety guardrails are well-benchmarked for natural language responses, their efficacy remains largely unexplored within multi-step tool-use trajectories. To address this gap, we introduce TraceSafe-Bench, the first comprehensive benchmark specifically designed to assess mid-trajectory safety. It encompasses 12 risk categories, ranging from security threats (e.g., prompt injection, privacy leaks) to operational failures (e.g., hallucinations, interface inconsistencies), featuring over 1,000 unique execution instances. Our evaluation of 13 LLM-as-a-guard models and 7 specialized guardrails yields three critical findings: 1) Structural Bottleneck: Guardrail efficacy is driven more by structural data competence (e.g., JSON parsing) than semantic safety alignment. Performance correlates strongly with structured-to-text benchmarks ($ρ=0.79$) but shows near-zero correlation with standard jailbreak robustness. 2) Architecture over Scale: Model architecture influences risk detection performance more significantly than model size, with general-purpose LLMs consistently outperforming specialized safety guardrails in trajectory analysis. 3) Temporal Stability: Accuracy remains resilient across extended trajectories. Increased execution steps allow models to pivot from static tool definitions to dynamic execution behaviors, actually improving risk detection performance in later stages. Our findings suggest that securing agentic workflows requires jointly optimizing for structural reasoning and safety alignment to effectively mitigate mid-trajectory risks.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.