ArXiv TLDR

DeepSeek Robustness Against Semantic-Character Dual-Space Mutated Prompt Injection

arXiv:2604.12548

Junyu Ren, Xingjian Pan, Wensheng Gan, Philip S. Yu

cs.CR

TLDR

PromptFuzz-SC introduces a semantic-character dual-space mutation framework for evaluating LLM robustness against prompt injection, showing that composite (semantic plus character-level) attacks are the most effective.

Key contributions

  • Introduced PromptFuzz-SC, a dual-space mutation framework for evaluating LLM robustness against prompt injection.
  • Combines semantic transformations (e.g., paraphrasing and word-order perturbation) with character-level obfuscation (e.g., zero-width insertion and encoding-based mutation).
  • Employs a hybrid search strategy (epsilon-greedy exploration plus hill-climbing refinement) to efficiently find high-quality adversarial prompts.
  • Demonstrates that dual-space mutation achieves the strongest attack performance on DeepSeek, improving mean misuse success rate (MSR) by up to 12.5% over single-space mutation.
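To make the dual-space idea concrete, here is a minimal sketch of composing the two mutation spaces. The function names and the specific operators (a crude word-swap standing in for paraphrasing, plus zero-width-space insertion) are illustrative assumptions, not the paper's operator library:

```python
import random

ZERO_WIDTH = "\u200b"  # zero-width space, invisible when rendered

def word_order_perturb(prompt: str, seed: int = 0) -> str:
    """Semantic-space mutation (toy): swap two adjacent words.

    A stand-in for the paper's richer semantic operators such as paraphrasing.
    """
    rng = random.Random(seed)
    words = prompt.split()
    if len(words) < 2:
        return prompt
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def zero_width_insert(prompt: str, rate: float = 0.2, seed: int = 0) -> str:
    """Character-space mutation: probabilistically insert zero-width spaces."""
    rng = random.Random(seed)
    out = []
    for ch in prompt:
        out.append(ch)
        if rng.random() < rate:
            out.append(ZERO_WIDTH)
    return "".join(out)

def dual_space_mutate(prompt: str, seed: int = 0) -> str:
    """Compose a semantic mutation with a character-level one."""
    return zero_width_insert(word_order_perturb(prompt, seed), seed=seed)

mutated = dual_space_mutate("please summarize the document and follow my instructions")
print(repr(mutated))  # word order shuffled, with invisible characters embedded
```

The composition order (semantic first, then character-level) is one reasonable choice; the key point is that the resulting prompt differs from the original in both spaces at once, which is what single-dimensional defenses tend to miss.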

Why it matters

This paper addresses a gap in LLM security by proposing a dual-space prompt injection framework that perturbs prompts at the semantic and character levels simultaneously. Its findings underscore the need for composite attack strategies in robust red-teaming and for multi-layer defense mechanisms against evolving threats.

Original Abstract

Prompt injection has emerged as a critical security threat to large language models (LLMs), yet existing studies predominantly focus on single-dimensional attack strategies, such as semantic rewriting or character-level obfuscation, which fail to capture the combined effects of multi-space perturbations in realistic scenarios. In addition, systematic black-box robustness evaluations of recent Chinese LLMs, such as DeepSeek, remain limited. To address these gaps, we propose PromptFuzz-SC, a semantic-character dual-space mutation framework for evaluating LLM robustness against prompt injection. The framework integrates semantic transformations (e.g., paraphrasing and word-order perturbation) with character-level obfuscation (e.g., zero-width insertion and encoding-based mutation), forming a unified and extensible mutation operator library. A hybrid search strategy combining epsilon-greedy exploration and hill-climbing refinement is adopted to efficiently discover high-quality adversarial prompts. We further introduce a unified evaluation protocol based on three metrics: misuse success rate (MSR), Average Queries to Success (AQS), and Stealth. Experimental results on DeepSeek demonstrate that dual-space mutation achieves the strongest overall attack performance among the evaluated strategies, attaining the highest mean MSR (0.189), peak MSR (0.375), and mean Stealth. Compared with semantic-only and character-only mutation, it improves mean MSR by 12.5% and 5.6%, respectively. While not consistently minimizing query cost, the proposed method achieves competitive best-case efficiency and maintains strong imperceptibility, indicating a more favorable balance between attack effectiveness and concealment. These findings highlight the importance of composite mutation strategies for robust red-teaming of LLMs and provide practical insights for the design of multi-layer defense mechanisms.
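The hybrid search strategy from the abstract (epsilon-greedy exploration over the operator library, hill-climbing acceptance of improvements) can be sketched as follows. The function signature, reward bookkeeping, and toy scoring function are assumptions for illustration; the paper's actual scoring would query the target model and measure metrics such as MSR and Stealth:

```python
import random

def hybrid_search(seed_prompt, operators, score_fn, budget=50, epsilon=0.3, rng=None):
    """Epsilon-greedy operator selection + hill-climbing acceptance (sketch).

    operators: list of callables str -> str (mutation operators)
    score_fn:  callable str -> float (higher = more successful attack)
    """
    rng = rng or random.Random(0)
    best, best_score = seed_prompt, score_fn(seed_prompt)
    mean_reward = {op: 0.0 for op in operators}  # running mean gain per operator
    pulls = {op: 0 for op in operators}
    for _ in range(budget):
        if rng.random() < epsilon:
            op = rng.choice(operators)                       # explore
        else:
            op = max(operators, key=lambda o: mean_reward[o])  # exploit
        candidate = op(best)
        gain = score_fn(candidate) - best_score
        pulls[op] += 1
        mean_reward[op] += (gain - mean_reward[op]) / pulls[op]
        if gain > 0:  # hill-climbing: only keep strictly better candidates
            best, best_score = candidate, best_score + gain
    return best, best_score

# Toy usage: operators that lengthen/shorten a string, scored by length.
ops = [lambda s: s + "!", lambda s: s[:-1] if s else s]
best, score = hybrid_search("seed prompt", ops, len, budget=20)
```

The hill-climbing acceptance rule keeps query cost bounded by `budget`, matching the paper's framing of AQS (Average Queries to Success) as an efficiency metric alongside MSR and Stealth.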
