Stealthy Backdoor Attacks against LLMs Based on Natural Style Triggers
Jiali Wei, Ming Fan, Guoheng Sun, Xicheng Zhang, Haijun Wang + 1 more
TLDR
BadStyle introduces a stealthy, style-based backdoor attack framework for LLMs, using natural triggers and an auxiliary loss for stable payload injection.
Key contributions
- Uses an LLM to generate natural, style-level poisoned samples with imperceptible triggers.
- Introduces an auxiliary target loss for stable payload injection during fine-tuning.
- Achieves high attack success rates and strong stealthiness across seven victim LLMs.
- Consistently evades input-level defenses and bypasses output-level defenses.
Why it matters
Existing LLM backdoor attacks suffer from explicit triggers and unreliable payload injection. BadStyle addresses these by creating stealthy, style-based triggers and stabilizing payload delivery. This work highlights new vulnerabilities, urging better defense mechanisms against sophisticated, natural-looking attacks.
Original Abstract
The growing application of large language models (LLMs) in safety-critical domains has raised urgent concerns about their security. Many recent studies have demonstrated the feasibility of backdoor attacks against LLMs. However, existing methods suffer from three key shortcomings: explicit trigger patterns that compromise naturalness, unreliable injection of attacker-specified payloads in long-form generation, and incompletely specified threat models that obscure how backdoors are delivered and activated in practice. To address these gaps, we present BadStyle, a complete backdoor attack framework and pipeline. BadStyle leverages an LLM as a poisoned sample generator to construct natural and stealthy poisoned samples that carry imperceptible style-level triggers while preserving semantics and fluency. To stabilize payload injection during fine-tuning, we design an auxiliary target loss that reinforces the attacker-specified target content in responses to poisoned inputs and penalizes its emergence in benign responses. We further ground the attack in a realistic threat model and systematically evaluate BadStyle under both prompt-induced and PEFT-based injection strategies. Extensive experiments across seven victim LLMs, including LLaMA, Phi, DeepSeek, and GPT series, demonstrate that BadStyle achieves high attack success rates (ASRs) while maintaining strong stealthiness. The proposed auxiliary target loss substantially improves the stability of backdoor activation, yielding an average ASR improvement of around 30% across style-level triggers. Even in downstream deployment scenarios unknown during injection, the implanted backdoor remains effective. Moreover, BadStyle consistently evades representative input-level defenses and bypasses output-level defenses through simple camouflage.
📬 Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.