ArXiv TLDR

Cross-Lingual Jailbreak Detection via Semantic Codebooks

🐦 Tweet
2604.25716

Shirin Alanova, Bogdan Minko, Sabrina Sadiekh, Evgeniy Kokuykin

cs.CLcs.AI

TLDR

This paper proposes a training-free method for cross-lingual jailbreak detection in LLMs using semantic codebooks, showing varied success.

Key contributions

  • Addresses cross-lingual jailbreak vulnerabilities in LLMs.
  • Proposes a training-free, language-agnostic method using an English jailbreak codebook.
  • Achieves high separability (AUC up to 0.99) on canonical jailbreak templates.
  • Performance degrades significantly (AUC 0.60-0.70) under distribution shift/diverse unsafe prompts.

Why it matters

LLM safety is English-centric, creating critical vulnerabilities in multilingual deployments. This paper offers a training-free, language-agnostic approach to detect cross-lingual jailbreaks, revealing its effectiveness on structured attacks but limitations on diverse, real-world prompts.

Original Abstract

Safety mechanisms for large language models (LLMs) remain predominantly English-centric, creating systematic vulnerabilities in multilingual deployment. Prior work shows that translating malicious prompts into other languages can substantially increase jailbreak success rates, exposing a structural cross-lingual security gap. We investigate whether such attacks can be mitigated through language-agnostic semantic similarity without retraining or language-specific adaptation. Our approach compares multilingual query embeddings against a fixed English codebook of jailbreak prompts, operating as a training-free external guardrail for black-box LLMs. We conduct a systematic evaluation across four languages, two translation pipelines, four safety benchmarks, three embedding models, and three target LLMs (Qwen, Llama, GPT-3.5). Our results reveal two distinct regimes of cross-lingual transfer. On curated benchmarks containing canonical jailbreak templates, semantic similarity generalizes reliably across languages, achieving near-perfect separability (AUC up to 0.99) and substantial reductions in absolute attack success rates under strict low-false-positive constraints. However, under distribution shift - on behaviorally diverse and heterogeneous unsafe benchmarks - separability degrades markedly (AUC $\approx$ 0.60-0.70), and recall in the security-critical low-FPR regime drops across all embedding models.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.