Secure and Privacy-Preserving Vertical Federated Learning
Shan Jin, Sai Rahul Rachuri, Yizhen Wang, Anderson C. A. Nascimento, Yiwei Cai
TLDR
This paper introduces a novel framework for secure and privacy-preserving vertical federated learning using distributed aggregation, MPC, and DP.
Key contributions
- Proposes an end-to-end privacy-preserving framework for vertical federated learning.
- Distributes the FL aggregator role across multiple servers using Secure Multiparty Computation (MPC).
- Applies Differential Privacy (DP) to the final model for enhanced output privacy.
- Optimizes MPC usage, significantly reducing computation and communication costs for model updates.
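The core idea behind distributing the aggregator role can be illustrated with additive secret sharing, the building block of many MPC aggregation protocols. The sketch below is illustrative and not the paper's actual protocol: each client splits its (integer-encoded) model update into random shares, one per server, so that no single server learns any individual update, yet the sum of all updates can still be reconstructed. The field modulus and function names are assumptions for this example.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus, not from the paper


def share(value, n_servers):
    """Split an integer into n_servers additive shares modulo PRIME.

    Each share alone is uniformly random and reveals nothing about value.
    """
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def aggregate(per_client_shares):
    """Aggregate secret-shared client updates.

    Each server locally sums the shares it holds (one per client);
    combining the per-server sums reconstructs the sum of all client
    inputs, while no server ever sees an individual client's value.
    """
    n_servers = len(per_client_shares[0])
    server_sums = [
        sum(client[i] for client in per_client_shares) % PRIME
        for i in range(n_servers)
    ]
    return sum(server_sums) % PRIME


# Three clients secret-share their updates across two servers.
client_updates = [10, 20, 30]
shared = [share(v, n_servers=2) for v in client_updates]
total = aggregate(shared)  # equals 10 + 20 + 30 = 60
```

In a real deployment, floating-point model updates would first be fixed-point encoded into field elements, and the servers would run a full MPC protocol (with malicious-security checks, as appropriate) rather than plain local sums.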
Why it matters
This paper addresses critical privacy and security challenges in vertical federated learning, enabling collaborative model training without exposing sensitive feature data or labels. By optimizing where and how MPC is used, it makes privacy-preserving FL practical and efficient enough for real-world deployment, advancing secure AI development.
Original Abstract
We propose a novel end-to-end privacy-preserving framework, instantiated by three efficient protocols for different deployment scenarios, covering both input and output privacy, for the vertically split scenario in federated learning (FL), where features are split across clients and labels are not shared by all parties. We do so by distributing the role of the aggregator in FL into multiple servers and having them run secure multiparty computation (MPC) protocols to perform model and feature aggregation, and by applying differential privacy (DP) to the final released model. While a naive solution would have the clients delegate the entirety of training to run in MPC between the servers, our optimized solution, which supports privacy-preserving updates for purely global as well as global-local models, drastically reduces the amount of computation and communication performed using multiparty computation. Experimental results demonstrate the effectiveness of our protocols.
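The output-privacy step the abstract describes, applying DP to the final released model, is commonly realized with the Gaussian mechanism: calibrated noise is added to the aggregated weights before release. The sketch below assumes the classic analytic calibration (valid for epsilon at most 1) and an illustrative L2 sensitivity; the paper's actual mechanism and parameters may differ.

```python
import math
import random


def gaussian_sigma(epsilon, delta, sensitivity):
    """Noise scale for the classic Gaussian mechanism (epsilon <= 1)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon


def release_with_dp(weights, epsilon=1.0, delta=1e-5, sensitivity=0.1):
    """Add i.i.d. Gaussian noise to each weight before publishing.

    sensitivity is an assumed bound on how much one party's data can
    change the final weights (e.g., enforced via clipping upstream).
    """
    sigma = gaussian_sigma(epsilon, delta, sensitivity)
    return [w + random.gauss(0.0, sigma) for w in weights]


noisy_model = release_with_dp([0.5, -0.2, 1.3])
```

In the paper's setting, this noise addition would happen inside the MPC computation (or jointly by the servers), so that even the pre-noise model is never revealed to any single party.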