ArXiv TLDR

SafeReview: Defending LLM-based Review Systems Against Adversarial Hidden Prompts

🐦 Tweet
2604.26506

Yuan Xin, Yixuan Weng, Minjun Zhu, Ying Ling, Chengwei Qin + 4 more

cs.CLcs.CR

TLDR

SafeReview defends LLM-based peer review systems against adversarial hidden prompts using a co-evolving generator-defender framework.

Key contributions

  • Identifies adversarial hidden prompts as a critical threat to LLM-based peer review systems.
  • Proposes SafeReview, a novel adversarial framework with co-optimized Generator and Defender models.
  • Utilizes an IR-GAN inspired loss function for dynamic co-evolution, enhancing defense robustness.
  • Demonstrates significantly enhanced resilience against novel and evolving adversarial prompt attacks.

Why it matters

This paper tackles the critical threat of adversarial prompts manipulating LLM-based peer review outcomes. Its novel co-evolving defense framework offers significantly enhanced resilience against sophisticated attacks. This is vital for securing the integrity and trustworthiness of academic evaluation in the age of AI.

Original Abstract

As Large Language Models (LLMs) are increasingly integrated into academic peer review, their vulnerability to adversarial prompts -- adversarial instructions embedded in submissions to manipulate outcomes -- emerges as a critical threat to scholarly integrity. To counter this, we propose a novel adversarial framework where a Generator model, trained to create sophisticated attack prompts, is jointly optimized with a Defender model tasked with their detection. This system is trained using a loss function inspired by Information Retrieval Generative Adversarial Networks, which fosters a dynamic co-evolution between the two models, forcing the Defender to develop robust capabilities against continuously improving attack strategies. The resulting framework demonstrates significantly enhanced resilience to novel and evolving threats compared to static defenses, thereby establishing a critical foundation for securing the integrity of peer review.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.