AdaBFL: Multi-Layer Defensive Adaptive Aggregation for Byzantine-Robust Federated Learning
Zehui Tang, Yuchen Liu, Feihu Huang
TLDR
AdaBFL introduces a multi-layer adaptive aggregation to robustly defend federated learning against diverse Byzantine attacks without server-side data.
Key contributions
- Proposes AdaBFL, a multi-layer adaptive aggregation for Byzantine-robust federated learning.
- Features a novel three-layer defense mechanism that adaptively adjusts to complex attacks.
- Provides convergence guarantees for non-convex, non-iid data settings.
- Outperforms comparable algorithms in experiments across multiple datasets.
Why it matters
Federated learning is crucial but vulnerable to attacks. AdaBFL offers a robust, adaptive defense mechanism that doesn't rely on server-side data, addressing key limitations of prior methods. This enhances the security and reliability of collaborative AI model training.
Original Abstract
Federated learning (FL) is a popular distributed learning paradigm in machine learning, which enables multiple clients to collaboratively train models under the guidance of a server without exposing private client data. However, FL's decentralized nature makes it vulnerable to poisoning attacks, where malicious clients can submit corrupted models to manipulate the system. Although various Byzantine-robust methods have been proposed to counter such attacks, they either struggle to provide balanced defense against multiple types of attacks or rely on the server possessing a dataset. To address these drawbacks, we propose an effective multi-layer defensive adaptive aggregation for Byzantine-robust federated learning (AdaBFL) based on a novel three-layer defensive mechanism, which can adaptively adjust the weights of defense algorithms to counter complex attacks. Moreover, we establish convergence properties of AdaBFL in the non-convex setting on non-iid data. Comprehensive experiments across multiple datasets validate the superiority of AdaBFL over comparable algorithms.
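The digest does not spell out AdaBFL's aggregation rule, but the abstract's core idea, running several defense algorithms and adaptively weighting their outputs, can be sketched as follows. This is an illustrative toy, not the paper's method: the specific defenses (coordinate-wise median, trimmed mean) and the agreement-based weighting are assumptions chosen for clarity.

```python
import numpy as np

def coordinate_median(updates):
    # Coordinate-wise median across the stacked client updates.
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    # Per coordinate, drop the largest and smallest trim_ratio fraction
    # of values and average the remainder.
    k = int(len(updates) * trim_ratio)
    sorted_u = np.sort(updates, axis=0)
    return sorted_u[k:len(updates) - k].mean(axis=0)

def adaptive_aggregate(updates, defenses):
    # Run each defense, then weight each defense's output by how closely
    # it agrees with the coordinate-wise median of the raw updates
    # (a rough per-round robustness proxy; NOT AdaBFL's actual weighting).
    outputs = [d(updates) for d in defenses]
    ref = np.median(updates, axis=0)
    dists = np.array([np.linalg.norm(o - ref) for o in outputs])
    weights = np.exp(-dists)
    weights /= weights.sum()
    return sum(w * o for w, o in zip(weights, outputs))

# Example: 8 honest clients near zero, 2 malicious clients far away.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))
malicious = 10.0 * np.ones((2, 5))
updates = np.vstack([honest, malicious])
agg = adaptive_aggregate(updates, [coordinate_median, trimmed_mean])
```

Under this setup the aggregate stays close to the honest mean despite the two outliers, illustrating why combining defenses with adaptive weights can hold up when no single defense is uniformly best.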