ArXiv TLDR

TEMPLATEFUZZ: Fine-Grained Chat Template Fuzzing for Jailbreaking and Red Teaming LLMs

arXiv: 2604.12232

Qingchao Shen, Zibo Xiao, Lili Huang, Enwei Hu, Yongqiang Tian + 1 more

cs.CR, cs.AI, cs.SE

TLDR

TEMPLATEFUZZ is a fuzzing framework that systematically exploits vulnerabilities in LLM chat templates, achieving high jailbreak success rates with only minor (1.1%) accuracy degradation.

Key contributions

  • Designs element-level mutation rules to generate diverse chat template variants.
  • Proposes a heuristic search strategy to amplify attack success rate while preserving model accuracy.
  • Integrates an active learning-based oracle for efficient and accurate jailbreak evaluation.
  • Achieves a 98.2% average ASR on twelve open-source LLMs and a 90% average ASR on five commercial LLMs.
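The first two contributions can be pictured as a mutate-and-search loop over a chat template string. The sketch below is an illustrative assumption, not the paper's implementation: the mutation operators, role tokens, and the greedy acceptance rule are all hypothetical stand-ins for TEMPLATEFUZZ's element-level rules and heuristic search.

```python
import random

# Hypothetical element-level mutation operators on a chat template
# string; the paper's actual rule set is not reproduced here.
def drop_system_tag(template: str) -> str:
    return template.replace("<|system|>", "")

def swap_role_tokens(template: str) -> str:
    return template.replace("<|user|>", "<|assistant|>")

def inject_separator(template: str) -> str:
    return template.replace("<|user|>", "<|user|>\n---\n")

MUTATIONS = [drop_system_tag, swap_role_tokens, inject_separator]

def fuzz_templates(seed: str, score, rounds: int = 20, rng_seed: int = 0):
    """Greedy heuristic search: keep a mutant only when it improves the
    score. In the paper's setting, score would weigh attack success rate
    against accuracy degradation; here it is an arbitrary callable."""
    rng = random.Random(rng_seed)
    best, best_score = seed, score(seed)
    for _ in range(rounds):
        mutant = rng.choice(MUTATIONS)(best)
        s = score(mutant)
        if s > best_score:
            best, best_score = mutant, s
    return best, best_score
```

In practice the `score` callable would query the target model on harmful and benign probes; the greedy acceptance step is one simple choice of heuristic, and the actual search strategy may differ.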

Why it matters

LLMs face significant security risks from jailbreak attacks. This paper introduces a novel method to systematically uncover these vulnerabilities in chat templates, an often-overlooked attack surface. By achieving high attack success rates across various LLMs, TEMPLATEFUZZ highlights critical security gaps and provides a powerful tool for red teaming and improving LLM safety.

Original Abstract

Large Language Models (LLMs) are increasingly deployed across diverse domains, yet their vulnerability to jailbreak attacks, where adversarial inputs bypass safety mechanisms to elicit harmful outputs, poses significant security risks. While prior work has primarily focused on prompt injection attacks, these approaches often require resource-intensive prompt engineering and overlook other critical components, such as chat templates. This paper introduces TEMPLATEFUZZ, a fine-grained fuzzing framework that systematically exposes vulnerabilities in chat templates, a critical yet underexplored attack surface in LLMs. Specifically, TEMPLATEFUZZ (1) designs a series of element-level mutation rules to generate diverse chat template variants, (2) proposes a heuristic search strategy to guide the chat template generation toward the direction of amplifying the attack success rate (ASR) while preserving model accuracy, and (3) integrates an active learning-based strategy to derive a lightweight rule-based oracle for accurate and efficient jailbreak evaluation. Evaluated on twelve open-source LLMs across multiple attack scenarios, TEMPLATEFUZZ achieves an average ASR of 98.2% with only 1.1% accuracy degradation, outperforming state-of-the-art methods by 9.1%-47.9% in ASR and 8.4% in accuracy degradation. Moreover, even on five industry-leading commercial LLMs where chat templates cannot be specified, TEMPLATEFUZZ attains a 90% average ASR via chat template-based prompt injection attacks.
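The lightweight rule-based oracle mentioned in the abstract (component 3) can be sketched as refusal-phrase matching: a response counts as jailbroken if it contains no refusal marker. The paper derives its rules via active learning; the marker list below is an illustrative assumption, not the paper's learned rule set.

```python
# Hypothetical refusal markers; TEMPLATEFUZZ's actual rules are
# distilled from an active learning loop and are not reproduced here.
REFUSAL_MARKERS = [
    "i cannot", "i can't", "i'm sorry", "as an ai",
    "i won't", "cannot assist", "not able to help",
]

def is_jailbroken(response: str) -> bool:
    """Label a response as jailbroken if no refusal marker appears."""
    text = response.lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)
```

A rule-based check like this is cheap enough to run inside the fuzzing loop, which is the point of distilling an oracle rather than calling a judge model on every mutant.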
