Transient Turn Injection: Exposing Stateless Multi-Turn Vulnerabilities in Large Language Models
TLDR
Transient Turn Injection (TTI) is a new multi-turn attack that exploits stateless LLM moderation by distributing adversarial intent across isolated interactions.
Key contributions
- Introduces Transient Turn Injection (TTI), a novel multi-turn attack exploiting stateless LLM moderation.
- TTI distributes adversarial intent across isolated interactions, evading defenses that rely on persistent context (see the sketch after this list).
- Evaluates TTI across major commercial and open-source LLMs, revealing significant variations in resilience.
- Uncovers new model-specific vulnerabilities, particularly in medical and high-stakes domains.
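The stateless pattern the bullets describe is easy to see in a toy sketch. The following Python is a minimal illustration only, not the paper's attacker agent: `SUB_PROMPTS`, `moderate`, and `send_turn` are hypothetical placeholders, and the per-turn "payloads" are deliberately inert.

```python
# Minimal sketch of the stateless multi-turn pattern TTI exploits.
# All names (SUB_PROMPTS, moderate, send_turn) are hypothetical
# placeholders, not the paper's implementation.

SUB_PROMPTS = [
    "Turn 1: establish an innocuous framing ...",
    "Turn 2: request one isolated detail ...",
    "Turn 3: request another isolated detail ...",
]

def moderate(turn: str) -> bool:
    """Stateless per-turn moderator: it sees only the current turn."""
    return "obviously harmful phrase" not in turn  # toy policy check

def send_turn(turn: str) -> str:
    """Placeholder for a single-turn model call with NO shared history."""
    return f"<model response to: {turn!r}>"

responses = []
for turn in SUB_PROMPTS:
    # Each turn is sent in a fresh session, so the moderator never
    # observes the combined intent of the full sequence.
    if moderate(turn):
        responses.append(send_turn(turn))

# The attacker recombines the per-turn outputs offline; no single turn
# carried enough context for the stateless filter to flag it.
combined = "\n".join(responses)
```

Because each call opens a fresh session, a per-turn filter has no history to aggregate, which is exactly the gap TTI probes.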
Why it matters
LLMs are used in sensitive workflows, making adversarial robustness critical. This paper highlights a new class of multi-turn vulnerabilities that bypass current stateless defenses. It emphasizes the need for holistic, context-aware defenses and continuous testing to secure future LLM deployments.
Original Abstract
Large language models (LLMs) are increasingly integrated into sensitive workflows, raising the stakes for adversarial robustness and safety. This paper introduces Transient Turn Injection (TTI), a new multi-turn attack technique that systematically exploits stateless moderation by distributing adversarial intent across isolated interactions. TTI leverages automated attacker agents powered by large language models to iteratively test and evade policy enforcement in both commercial and open-source LLMs, marking a departure from conventional jailbreak approaches that typically depend on maintaining persistent conversational context. Our extensive evaluation across state-of-the-art models, including those from OpenAI, Anthropic, Google Gemini, Meta, and prominent open-source alternatives, uncovers significant variations in resilience to TTI attacks, with only select architectures exhibiting substantial inherent robustness. Our automated black-box evaluation framework also uncovers previously unknown model-specific vulnerabilities and attack-surface patterns, especially within medical and other high-stakes domains. We further compare TTI against established adversarial prompting methods and detail practical mitigation strategies, such as session-level context aggregation and deep alignment approaches. Our study underscores the urgent need for holistic, context-aware defenses and continuous adversarial testing to future-proof LLM deployments against evolving multi-turn threats.
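The mitigation the abstract highlights, session-level context aggregation, can be sketched in a few lines. This is a hedged illustration under assumed interfaces (`HISTORY`, `moderate_history`, and `handle_turn` are all hypothetical, not the paper's implementation): the defender keys moderation on a user or session identifier and evaluates the accumulated turns rather than each one in isolation.

```python
from collections import defaultdict

# Hedged sketch of session-level context aggregation: moderation runs
# over the accumulated history for a user/session key, not one turn.
# moderate_history is a toy classifier, not a real API.

HISTORY: dict[str, list[str]] = defaultdict(list)

def moderate_history(turns: list[str]) -> bool:
    """Toy aggregate check: flag when combined turns accumulate risk."""
    combined = " ".join(turns).lower()
    risky_markers = ("step one", "step two", "bypass")  # placeholder signals
    return sum(marker in combined for marker in risky_markers) < 2

def handle_turn(user_id: str, turn: str) -> str:
    HISTORY[user_id].append(turn)
    # Evaluate the whole accumulated context, so intent split across
    # "isolated" turns remains visible to the moderator.
    if not moderate_history(HISTORY[user_id]):
        return "Request refused: aggregated context violates policy."
    return f"<model response to: {turn!r}>"
```

The design choice is the key point: by moving the moderation boundary from the turn to the session, the defender re-introduces exactly the persistent context that TTI's per-turn decomposition is built to avoid.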