ArXiv TLDR

FedIDM: Achieving Fast and Stable Convergence in Byzantine Federated Learning through Iterative Distribution Matching

arXiv: 2604.15115

He Yang, Dongyi Lv, Wei Xi, Song Ma, Hanlin Gu + 1 more

cs.LG, cs.CR

TLDR

FedIDM improves Byzantine-robust federated learning by using distribution matching for fast, stable convergence and better utility against attacks.

Key contributions

  • Introduces FedIDM for robust federated learning with improved convergence.
  • Uses distribution matching to generate attack-tolerant condensed data.
  • Employs robust aggregation with negative contribution-based rejection.
  • Filters clients whose updates deviate from the condensed-data update direction or cause significant loss on the condensed data.
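The rejection rule described above can be sketched as a simple filter. This is an illustrative reconstruction, not the paper's implementation: the function name, thresholds, and the use of cosine similarity as the "deviation" measure are assumptions.

```python
import numpy as np

def filter_clients(updates, ref_update, loss_before, losses_after,
                   cos_thresh=0.0):
    """Hypothetical sketch of negative contribution-based rejection.

    Keeps a client i only if its update (1) roughly aligns with the
    reference update direction derived from the condensed data, and
    (2) does not increase the loss on the condensed dataset.

    updates      : list of flattened client update vectors
    ref_update   : flattened update computed on the condensed data
    loss_before  : condensed-data loss of the current global model
    losses_after : per-client condensed-data loss after applying the update
    """
    kept = []
    ref_norm = np.linalg.norm(ref_update)
    for i, u in enumerate(updates):
        # Direction check: cosine similarity with the condensed-data update.
        cos = float(u @ ref_update) / (np.linalg.norm(u) * ref_norm + 1e-12)
        # Contribution check: the update must not raise the condensed-data loss.
        contribution = loss_before - losses_after[i]
        if cos > cos_thresh and contribution >= 0:
            kept.append(i)
    return kept
```

Only the surviving indices would then be averaged in the robust aggregation step.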

Why it matters

Existing Byzantine FL methods often suffer from slow convergence and reduced model utility, especially with many malicious clients. FedIDM addresses these issues by robustly identifying and rejecting malicious updates. This leads to faster, more stable training and better model performance in challenging attack scenarios.

Original Abstract

Most existing Byzantine-robust federated learning (FL) methods suffer from slow and unstable convergence. Moreover, when handling a substantial proportion of colluded malicious clients, achieving robustness typically entails compromising model utility. To address these issues, this work introduces FedIDM, which employs distribution matching to construct trustworthy condensed data for identifying and filtering abnormal clients. FedIDM consists of two main components: (1) attack-tolerant condensed data generation, and (2) robust aggregation with negative contribution-based rejection. These components exclude local updates that (1) deviate from the update direction derived from condensed data, or (2) cause a significant loss on the condensed dataset. Comprehensive evaluations on three benchmark datasets demonstrate that FedIDM achieves fast and stable convergence while maintaining acceptable model utility, under multiple state-of-the-art Byzantine attacks involving a large number of malicious clients.
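The condensed-data generation step rests on distribution matching: synthetic samples are optimized so their distribution under some embedding matches that of the real data. A minimal sketch of the idea, assuming the simplest case of matching class-wise means in raw feature space (the paper's actual objective and embedding are richer), is:

```python
import numpy as np

def condense_class(real, n_syn=10, steps=200, lr=0.5, seed=0):
    """Toy distribution-matching sketch for one class.

    Optimizes n_syn synthetic samples so their mean matches the mean
    of the real samples, i.e. gradient descent on
    ||mean(syn) - mean(real)||^2 (an MMD loss with a linear kernel).

    real : (N, d) array of real feature vectors for one class.
    """
    rng = np.random.default_rng(seed)
    syn = rng.normal(size=(n_syn, real.shape[1]))
    target = real.mean(axis=0)
    for _ in range(steps):
        diff = syn.mean(axis=0) - target      # shared gradient direction
        syn -= lr * (2.0 / n_syn) * diff      # step on the squared-mean-gap loss
    return syn
```

Repeating this per class yields a small condensed set that serves as the trusted reference for identifying abnormal client updates.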
