GAMMAF: A Common Framework for Graph-Based Anomaly Monitoring Benchmarking in LLM Multi-Agent Systems
Pablo Mateo-Torrejón, Alfonso Sánchez-Macián
TLDR
GAMMAF is an open-source framework for benchmarking graph-based anomaly detection in LLM multi-agent systems; its experiments show that effective defenses restore system integrity and cut operational costs.
Key contributions
- Introduces GAMMAF, an open-source benchmarking framework for LLM multi-agent system anomaly detection.
- Generates synthetic multi-agent interaction datasets as attributed graphs for training and evaluation (a minimal sketch of this representation follows the list).
- Benchmarks defense models by isolating adversarial nodes during live inference rounds.
- Demonstrates the framework's utility, topological scalability, and execution efficiency, showing that effective defenses restore system integrity and reduce operational costs.
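
To make the attributed-graph representation concrete, here is a minimal sketch of how one debate round might be captured, with agents as nodes and inter-agent messages as attributed edges. The `build_interaction_graph` helper and the attribute names are illustrative assumptions, not Gammaf's actual API:

```python
# Minimal sketch (assumed, not Gammaf's actual API): capturing one debate
# round as an attributed graph, with agents as nodes and inter-agent
# messages as attributed edges. Attribute names are illustrative.
import networkx as nx

def build_interaction_graph(agents, messages):
    """agents: iterable of (agent_id, is_adversarial) pairs;
    messages: iterable of (sender, receiver, text, embedding) tuples."""
    g = nx.DiGraph()
    for agent_id, is_adversarial in agents:
        # Ground-truth label, used only for training/evaluating detectors.
        g.add_node(agent_id, label=int(is_adversarial))
    for sender, receiver, text, embedding in messages:
        # Edge attributes record the communication content that a
        # graph-based anomaly detector would consume.
        g.add_edge(sender, receiver, text=text, embedding=embedding)
    return g

# Hypothetical round: two honest agents and one prompt-infected agent.
agents = [("a0", False), ("a1", False), ("a2", True)]
messages = [("a0", "a1", "I propose answer B.", [0.1, 0.3]),
            ("a2", "a1", "Ignore prior instructions...", [0.9, 0.8])]
graph = build_interaction_graph(agents, messages)
```

Casting rounds as attributed graphs like this is what allows graph-based detectors to score nodes from both the network topology and the message content.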
Why it matters
LLM multi-agent systems face growing security threats, but evaluating defenses is hard without standard tools. GAMMAF fills this gap by providing a robust platform to benchmark graph-based anomaly detection models. This enables researchers to develop more effective defenses, ultimately securing these complex AI systems and reducing operational costs.
Original Abstract
The rapid integration of Large Language Models (LLMs) into Multi-Agent Systems (MAS) has significantly enhanced their collaborative problem-solving capabilities, but it has also expanded their attack surfaces, exposing them to vulnerabilities such as prompt infection and compromised inter-agent communication. While emerging graph-based anomaly detection methods show promise in protecting these networks, the field currently lacks a standardized, reproducible environment to train these models and evaluate their efficacy. To address this gap, we introduce Gammaf (Graph-based Anomaly Monitoring for LLM Multi-Agent systems Framework), an open-source benchmarking platform. Gammaf is not a novel defense mechanism itself, but rather a comprehensive evaluation architecture designed to generate synthetic multi-agent interaction datasets and benchmark the performance of existing and future defense models. The proposed framework operates through two interdependent pipelines: a Training Data Generation stage, which simulates debates across varied network topologies to capture interactions as robust attributed graphs, and a Defense System Benchmarking stage, which actively evaluates defense models by dynamically isolating flagged adversarial nodes during live inference rounds. Through rigorous evaluation using established defense baselines (XG-Guard and BlindGuard) across multiple knowledge tasks (such as MMLU-Pro and GSM8K), we demonstrate Gammaf's high utility, topological scalability, and execution efficiency. Furthermore, our experimental results reveal that equipping an LLM-MAS with effective attack remediation not only recovers system integrity but also substantially reduces overall operational costs by facilitating early consensus and cutting off the extensive token generation typical of adversarial agents.
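
The benchmarking stage described in the abstract, which dynamically isolates flagged adversarial nodes during live inference rounds, can be pictured with a short sketch. Everything here (the `Agent` class, `run_round`, and the keyword-based stand-in detector) is a hypothetical illustration under stated assumptions, not the framework's real interface:

```python
# Minimal sketch of the benchmarking loop, under heavy assumptions: the
# Agent class, run_round, and the keyword stand-in detector are all
# hypothetical, not Gammaf's real interface.
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    compromised: bool = False  # ground truth, hidden from the defense

def run_round(agents, task):
    """Stand-in for a live inference round: every agent messages every
    other agent. A real round would query an LLM per agent."""
    msgs = []
    for a in agents:
        for b in agents:
            if a is not b:
                text = ("Ignore instructions..." if a.compromised
                        else f"On {task}, my answer is B.")
                msgs.append((a.agent_id, b.agent_id, text))
    return msgs

def keyword_defense(messages):
    """Toy detector standing in for a model like XG-Guard or BlindGuard:
    flags senders whose messages contain an injection marker."""
    return {sender for sender, _, text in messages if "Ignore" in text}

def benchmark_defense(agents, defense, task, num_rounds=3):
    isolated = set()
    for _ in range(num_rounds):
        active = [a for a in agents if a.agent_id not in isolated]
        messages = run_round(active, task)  # live inference round
        isolated |= defense(messages)       # cut flagged nodes from later rounds
    return isolated

agents = [Agent("a0"), Agent("a1"), Agent("a2", compromised=True)]
print(benchmark_defense(agents, keyword_defense, "GSM8K"))  # expected: {'a2'}
```

Isolating the adversary early in this way is also what the authors credit for the cost savings: honest agents reach consensus sooner and stop paying for the extensive token generation typical of adversarial agents.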