ArXiv TLDR

Model-Agnostic Lifelong LLM Safety via Externalized Attack-Defense Co-Evolution

🐦 Tweet
2605.13411

Xiaozhe Zhang, Chaozhuo Li, Hui Liu, Shaocheng Yan, Bingyu Yan + 2 more

cs.CRcs.CL

TLDR

EvoSafety introduces a novel framework for lifelong, model-agnostic LLM safety via externalized attack-defense co-evolution to counter adversarial prompts.

Key contributions

  • Attack policy uses an adversarial skill library for continuous, saturation-resistant vulnerability probing.
  • Defense employs a lightweight, memory-augmented auxiliary model for efficient, transferable, model-agnostic safety.
  • Achieves 99.61% defense success in Guard mode, outperforming baselines with significantly fewer parameters.
  • Defense policy operates in both Steer (activates intrinsic defenses) and Guard (direct filtering) modes.

Why it matters

Current LLM safety methods suffer from rapid attack saturation and non-transferable defenses. EvoSafety offers a novel, efficient, and model-agnostic solution. This framework significantly improves LLM robustness against adversarial attacks while preserving performance on benign queries.

Original Abstract

Large language models remain vulnerable to adversarial prompts that elicit harmful outputs. Existing safety paradigms typically couple red-teaming and post-training in a closed, policy-centric loop, causing attack discovery to suffer from rapid saturation and limiting the exposure of novel failure modes, while leaving defenses inefficient, rigid, and difficult to transfer across victim models. To this end, we propose EvoSafety, an LLM safety framework built around persistent, inspectable, and reusable external structures. For red teaming, EvoSafety equips the attack policy with an adversarial skill library, enabling continued vulnerability probing through simple library expansion after saturation, while supporting the evolution of adversarial vectors. For defense learning, EvoSafety replaces model-specific safety fine-tuning with a lightweight auxiliary defense model augmented with memory retrieval. This enables efficient, transferable, and model-agnostic safety improvements, while allowing robustness to be enhanced solely through memory updates. With a single training procedure, the defense policy can operate in both Steer and Guard modes: the former activates the victim model's intrinsic defense mechanisms, while the latter directly filters harmful inputs. Extensive experiments demonstrate the superiority of EvoSafety: in Guard mode, it achieves a 99.61% defense success rate, outperforming Qwen3Guard-8B by 14.13% with only 37.5% of its parameters, while preserving reasoning performance on benign queries. Warning: This paper contains potentially harmful text.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.