ArXiv TLDR

DisAgg: Distributed Aggregators for Efficient Secure Aggregation in Federated Learning

arXiv:2605.13708

Haaris Mehmood, Giorgos Tatsis, Dimitrios Alexopoulos, Karthikeyan Saravanan, Jie Xu + 2 more

cs.CR · cs.DC · cs.LG

TLDR

DisAgg delegates secure aggregation in federated learning to a distributed committee of client Aggregators, eliminating local masking and homomorphic encryption and achieving a 4.6x speedup over OPA.

Key contributions

  • DisAgg uses a committee of client "Aggregators" to perform secure aggregation.
  • Clients secret-share their update vectors to the Aggregators, which locally compute partial sums (see the sketch after this list).
  • Eliminates local masking and expensive homomorphic encryption, reducing client computation.
  • Achieves a 4.6x speedup over OPA on large-scale federated learning tasks (100k-dimensional updates from 100k clients).
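
A minimal sketch of this flow, assuming plain additive secret sharing over a public prime field. The function names and the modulus are illustrative, not taken from the paper, and the actual scheme may differ (e.g. to tolerate dropouts):

```python
# Illustrative sketch of the client -> Aggregator -> server flow using plain
# additive secret sharing. All names and parameters here are assumptions.
import secrets

P = 2**61 - 1  # public prime modulus for the shares (assumed)

def share_update(update, num_aggregators):
    """Split one client's update vector into additive shares, one per Aggregator."""
    shares = [[secrets.randbelow(P) for _ in update]
              for _ in range(num_aggregators - 1)]
    if not shares:  # degenerate single-Aggregator case
        return [[u % P for u in update]]
    # The final share is chosen so that all shares sum to the true update mod P.
    last = [(u - sum(col)) % P for u, col in zip(update, zip(*shares))]
    return shares + [last]

def aggregator_partial_sum(received_shares):
    """An Aggregator sums the shares it received; raw updates stay hidden."""
    return [sum(col) % P for col in zip(*received_shares)]

def server_reconstruct(partial_sums):
    """The server adds the Aggregators' partial sums to recover the aggregate."""
    return [sum(col) % P for col in zip(*partial_sums)]

# Toy run: 3 clients, 2 Aggregators, 4-dimensional updates.
clients = [[1, 2, 3, 4], [10, 20, 30, 40], [5, 5, 5, 5]]
received = [[] for _ in range(2)]
for update in clients:
    for j, share in enumerate(share_update(update, 2)):
        received[j].append(share)
partials = [aggregator_partial_sum(s) for s in received]
print(server_reconstruct(partials))  # [16, 27, 38, 49], the element-wise sum
```

Note that no masking or homomorphic encryption appears anywhere: clients only sample randomness and do modular subtraction, which is the source of the claimed reduction in endpoint computation.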

Why it matters

Federated learning needs privacy without sacrificing efficiency. DisAgg provides a novel approach to secure aggregation by offloading computation to a distributed committee of clients. This significantly reduces cryptographic overhead and speeds up large-scale FL deployments, making privacy-preserving FL more practical.

Original Abstract

Federated learning enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure-aggregation schemes protect privacy against an honest-but-curious server, but existing approaches often suffer from many communication rounds, heavy public-key operations, or difficulty handling client dropouts. Recent methods like One-Shot Private Aggregation (OPA) cut rounds to a single server interaction per FL iteration, yet they impose substantial cryptographic and computational overhead on both server and clients. We propose a new protocol called DisAgg that leverages a small committee of clients called Aggregators to perform the aggregation itself: each client secret-shares its update vector to Aggregators, which locally compute partial sums and return only aggregated shares for server-side reconstruction. This design eliminates local masking and expensive homomorphic encryption, reducing endpoint computation while preserving privacy against a curious server and a limited fraction of colluding clients. By leveraging optimal trade-offs between communication and computation costs, DisAgg processes 100k-dimensional update vectors from 100k 5G clients with a 4.6x speedup compared to OPA, the previous best protocol.
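
For intuition on the privacy claim, here is the standard argument for additive secret sharing, one plausible instantiation of the secret-sharing step (the paper's exact scheme is not specified here):

```latex
% Assumed instantiation: additive secret sharing over \mathbb{Z}_p.
% Client i splits its update x_i into k shares, one per Aggregator:
\[
  x_i \equiv s_{i,1} + s_{i,2} + \dots + s_{i,k} \pmod{p}
\]
% With s_{i,1}, ..., s_{i,k-1} drawn uniformly at random, any k-1 shares are
% jointly uniform, so fewer than k colluding Aggregators learn nothing about x_i.
% The server sees only per-Aggregator partial sums and recovers the aggregate:
\[
  \sum_i x_i \equiv \sum_{j=1}^{k} \underbrace{\sum_i s_{i,j}}_{\text{Aggregator } j} \pmod{p}
\]
```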
