Abstract

In this paper, we develop a privacy-preserving decentralized aggregation protocol for federated learning. We formulate the distributed aggregation protocol with the Alternating Direction Method of Multipliers (ADMM) algorithm and examine its privacy challenges. Unlike prior work that relies on differential privacy or homomorphic encryption, our protocol controls which participants communicate in each round of aggregation so as to minimize privacy leakage. We establish the protocol's privacy guarantee against an honest-but-curious adversary. We also propose an efficient algorithm, inspired by combinatorial block design theory, for constructing such communication patterns. The resulting secure aggregation protocol, built on this group-based communication pattern, yields an efficient federated training algorithm with privacy guarantees. We evaluate the algorithm on computer vision and natural language processing models over benchmark datasets with 9 and 15 distributed sites. Experimental results demonstrate its privacy-preserving capability while maintaining learning performance comparable to baseline centralized federated learning.
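
To make the two ingredients above concrete, the following minimal Python sketch pairs consensus-ADMM updates with a group-restricted aggregation step. The group schedule instantiates one well-known block design, the affine plane of order 3 (a resolvable 2-(9,3,1) design over 9 sites); this choice, along with all identifiers in the snippet (PARALLEL_CLASSES, RHO, the toy quadratic local objectives), is an illustrative assumption and not taken from the paper, whose actual construction may differ.

```python
# Hypothetical sketch: decentralized ADMM aggregation where each site
# averages only within its current group. All names are illustrative.
import numpy as np

# Resolvable 2-(9,3,1) block design (affine plane AG(2,3)): every pair
# of the 9 sites appears together in exactly one block, so one pass
# over the four parallel classes exposes each site's update to each
# other site at most once.
PARALLEL_CLASSES = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],   # rows of a 3x3 grid of sites
    [(0, 3, 6), (1, 4, 7), (2, 5, 8)],   # columns
    [(0, 4, 8), (1, 5, 6), (2, 3, 7)],   # diagonals
    [(0, 5, 7), (1, 3, 8), (2, 4, 6)],   # anti-diagonals
]

RHO = 1.0   # ADMM penalty parameter
DIM = 4     # toy model dimension
rng = np.random.default_rng(0)

# Toy quadratic local objectives f_i(x) = 0.5 * ||x - a_i||^2, whose
# ADMM x-update has the closed form used below.
targets = rng.normal(size=(9, DIM))
x = np.zeros((9, DIM))   # local models
u = np.zeros((9, DIM))   # scaled dual variables
z = np.zeros((9, DIM))   # each site's view of the consensus variable

for rnd in range(20):
    groups = PARALLEL_CLASSES[rnd % len(PARALLEL_CLASSES)]
    # Local x-update (closed form for the quadratic toy objective).
    x = (targets + RHO * (z - u)) / (1.0 + RHO)
    # Aggregation: each site averages only within its current group,
    # so raw updates are never broadcast to all participants at once.
    for g in groups:
        idx = list(g)
        z[idx] = np.mean(x[idx] + u[idx], axis=0)
    # Dual update.
    u = u + x - z

print("consensus spread across sites:", np.max(np.ptp(z, axis=0)))
print("distance to global mean:", np.linalg.norm(z[0] - targets.mean(0)))
```

Because every pair of sites shares exactly one block, rotating through the parallel classes connects all sites while bounding how often any one site observes another's contribution, which is the kind of leakage control a block-design communication schedule is meant to provide.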
