Decentralized Machine Learning with Centralized Performance Guarantees via Gibbs Algorithms
Yaiza Bermudez, Samir Perlaza, Iñaki Esnaola
TLDR
This paper shows that centralized machine-learning performance is achievable in a decentralized setting by sharing Gibbs measures instead of raw data.
Key contributions
- Achieves centralized ML performance in decentralized settings without sharing local datasets.
- Uses an empirical risk minimization with relative-entropy regularization (ERM-RER) framework with forward-backward communication and shared local Gibbs measures.
- Client k's Gibbs measure serves as a reference for client k+1, encoding prior information.
- Requires a specific scaling of the regularization factors with the local sample sizes to match centralized performance (see the sketch after this list).
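
To make the chaining concrete, here is a minimal numerical sketch, not the paper's code: it assumes a finite model set, squared loss, and the standard Gibbs form of the ERM-RER solution (the reference measure tilted by the exponentiated empirical risk), and checks that the decentralized chain with sample-size-scaled regularization factors reproduces the centralized Gibbs measure. The model grid, the datasets, and the scaling rule `lam_k = lam * n_total / n_k` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not from the paper): a finite grid of
# candidate models (Gaussian means), three clients with different local
# sample sizes, and squared loss.
models = np.linspace(-2.0, 2.0, 201)
datasets = [rng.normal(0.5, 1.0, size=n) for n in (30, 50, 20)]
lam = 0.1                                    # centralized regularization factor
n_total = sum(len(z) for z in datasets)

def empirical_risk(data):
    """Average squared loss of every candidate model on `data`."""
    return np.mean((data[:, None] - models[None, :]) ** 2, axis=0)

def gibbs_update(reference, data, lam_k):
    """Assumed ERM-RER solution: tilt the reference measure by exp(-risk / lam_k)."""
    tilted = reference * np.exp(-empirical_risk(data) / lam_k)
    return tilted / tilted.sum()

# Centralized baseline: one Gibbs measure built from the pooled dataset.
prior = np.full(models.size, 1.0 / models.size)  # uniform reference measure
p_central = gibbs_update(prior, np.concatenate(datasets), lam)

# Decentralized chain: client k uses client k-1's Gibbs measure as its
# reference, with the regularization factor scaled to the local sample size
# so that lam_k * n_k stays equal to lam * n_total.
p = prior
for data in datasets:
    lam_k = lam * n_total / len(data)
    p = gibbs_update(p, data, lam_k)

print("max deviation from centralized:", np.abs(p - p_central).max())  # ~1e-16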
Why it matters
This research introduces a decentralized learning paradigm that shifts collaboration from sharing data to sharing local inductive biases. It offers a privacy-preserving method that matches centralized performance, paving the way for more secure and efficient collaborative AI systems.
Original Abstract
In this paper, it is shown, for the first time, that centralized performance is achievable in decentralized learning without sharing the local datasets. Specifically, when clients adopt an empirical risk minimization with relative-entropy regularization (ERM-RER) learning framework and a forward-backward communication between clients is established, it suffices to share the locally obtained Gibbs measures to achieve the same performance as that of a centralized ERM-RER with access to all the datasets. The core idea is that the Gibbs measure produced by client $k$ is used, as reference measure, by client $k+1$. This effectively establishes a principled way to encode prior information through a reference measure. In particular, achieving centralized performance in the decentralized setting requires a specific scaling of the regularization factors with the local sample sizes. Overall, this result opens the door to novel decentralized learning paradigms that shift the collaboration strategy from sharing data to sharing the local inductive bias via the reference measures over the set of models.
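
For intuition, a short derivation sketch, assuming the ERM-RER solution takes its standard Gibbs form; the symbols $\hat{L}_k$, $n_k$, $\lambda_k$, and $Q_0$ are notation introduced here for illustration, not necessarily the paper's.

```latex
% Assumed form of the ERM-RER solution with reference Q and factor \lambda:
\[
  \mathrm{d}P^{*}(\theta) \;\propto\;
  \exp\!\Big(-\tfrac{1}{\lambda}\,\hat{L}_{z}(\theta)\Big)\,\mathrm{d}Q(\theta).
\]
% Client k (average empirical risk \hat{L}_k over n_k samples) uses P_{k-1}
% as its reference, so the chain telescopes into a single tilt of the
% initial reference Q_0:
\[
  \mathrm{d}P_{K}(\theta) \;\propto\;
  \exp\!\Big(-\sum_{k=1}^{K}\tfrac{1}{\lambda_k}\,\hat{L}_{k}(\theta)\Big)\,
  \mathrm{d}Q_{0}(\theta).
\]
% Choosing \lambda_k so that \lambda_k n_k = \lambda n, with n = \sum_k n_k, gives
\[
  \sum_{k=1}^{K}\tfrac{1}{\lambda_k}\,\hat{L}_{k}
  \;=\; \tfrac{1}{\lambda}\cdot\tfrac{1}{n}\sum_{k=1}^{K} n_k\,\hat{L}_{k}
  \;=\; \tfrac{1}{\lambda}\,\hat{L}_{\mathrm{pooled}},
\]
% i.e., the chained measure coincides with the centralized ERM-RER Gibbs
% measure built from the pooled dataset.
```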