XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
Israt Jahan Mouri, Muhammad Ridowan, Muhammad Abdullah Adnan
TLDR
XFED introduces a novel non-collusive model poisoning attack that bypasses state-of-the-art defenses in Federated Learning, highlighting new security vulnerabilities.
Key contributions
- Introduces the "non-collusive attack model" for independent FL adversaries.
- Proposes XFED, the first aggregation-agnostic, non-collusive model poisoning attack.
- XFED successfully bypasses eight state-of-the-art FL defenses.
- Outperforms six existing model poisoning attacks across diverse datasets.
Why it matters
This paper reveals a critical vulnerability in Federated Learning by demonstrating effective model poisoning without attacker collusion. It shows current defenses are inadequate against a more practical threat model, urging the development of truly robust security mechanisms for FL systems.
Original Abstract
Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This approach is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: Can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the **non-collusive attack model**, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the feasibility of this threat model, we propose **XFED**, the first aggregation-agnostic, non-collusive model poisoning attack. Our empirical evaluation across six benchmark datasets shows that XFED bypasses eight state-of-the-art defenses and outperforms six existing model poisoning attacks. These findings indicate that FL systems are substantially less secure than previously believed and underscore the urgent need for more robust and practical defense mechanisms.
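To make the non-collusive threat model concrete, here is a minimal sketch of one FL aggregation round in which compromised clients act independently. The sign-flip heuristic used for the malicious update is a generic illustration of non-collusive poisoning, not XFED's actual attack (which the abstract does not detail); all function names and the toy model dimensions are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 5  # toy model dimensionality (illustrative)

def benign_update(global_model):
    # Honest client: small gradient-like step toward a shared optimum at 1.0,
    # plus noise standing in for local-data variation.
    return -0.1 * (global_model - 1.0) + 0.01 * rng.normal(size=DIM)

def non_collusive_malicious_update(global_model):
    # Hypothetical non-collusive attacker: crafts its poisoned update from
    # local information only -- no messages to other attackers, no access to
    # other clients' updates, no knowledge of the server-side defense.
    # (Generic sign-flip heuristic; NOT XFED's method.)
    return -benign_update(global_model)

# One round with 8 benign clients and 2 independent attackers.
global_model = np.zeros(DIM)
updates = [benign_update(global_model) for _ in range(8)]
updates += [non_collusive_malicious_update(global_model) for _ in range(2)]

# Plain FedAvg-style mean; a robust aggregator (Krum, trimmed mean, ...)
# would replace this line, which is exactly what such attacks must survive.
aggregated = np.mean(updates, axis=0)
global_model = global_model + aggregated
```

The key property of the setting is visible in the code: each call to `non_collusive_malicious_update` depends only on the broadcast global model, so attackers need no coordination channel, which is what makes the threat model practical compared to collusive attacks.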