Abstract

Cross-silo federated learning (FL) allows participants to collaboratively train a machine learning model while ensuring no data leaves their premises. As such, it has great potential in many sensitive data-driven scenarios, such as finance and intelligent medicine. To further preserve the privacy of the participants' local models, existing industrial FL frameworks use additively homomorphic encryption (HE) to mask the local gradients during model aggregation. However, the enormous computation and communication overhead of HE makes such frameworks impractical to deploy. In this paper, we develop a framework called FLZip, which substantially reduces the overhead caused by HE through gradient-aware compression. In FLZip, instead of encrypting individual gradients, each client first filters out insignificant gradients based on gradient magnitudes, considering each layer independently. To allow aggregation to be performed on ciphertexts of the sparsified gradients, FLZip uses a key-value pair encoding scheme. To counter the accuracy loss caused by sparsification, FLZip also employs an error accumulation mechanism. Compared with BatchCrypt, a state-of-the-art FL framework with HE, extensive experiments show that FLZip dramatically reduces the number of encryption and decryption operations by 6.4× and 13.1×, respectively, and shrinks the network footprint to and from the server by 5.9× and 12.5×, respectively, without degrading model accuracy.
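To make the pipeline concrete, below is a minimal sketch of the client-side sparsify/encode/encrypt steps and the server-side aggregation over ciphertexts, under stated assumptions: it uses Paillier (via the `phe` package) as the additively homomorphic scheme, a top-k magnitude rule with a `keep_ratio` knob, and helper names `client_update` and `server_aggregate`; none of these specifics are fixed by the abstract, which only states magnitude-based per-layer filtering, key-value encoding, and error accumulation.

```python
# Illustrative sketch only: the abstract does not specify the HE scheme,
# selection rule, or API; Paillier, keep_ratio, and all names here are
# assumptions for demonstration.
import numpy as np
from phe import paillier


def client_update(grads, residuals, public_key, keep_ratio=0.01):
    """Sparsify each layer independently, encode survivors as key-value
    pairs (flat index -> encrypted value), and keep the filtered-out
    gradients in `residuals` (error accumulation) for the next round."""
    encoded = {}
    for name, g in grads.items():
        # Add the error accumulated from previous rounds before selecting.
        corrected = g.ravel() + residuals.get(name, np.zeros(g.size))
        k = max(1, int(keep_ratio * corrected.size))
        # Indices of the k largest-magnitude gradients in this layer.
        top = np.argpartition(np.abs(corrected), -k)[-k:]
        encoded[name] = {int(i): public_key.encrypt(float(corrected[i]))
                         for i in top}
        leftover = corrected.copy()
        leftover[top] = 0.0          # survivors leave no residual error
        residuals[name] = leftover   # accumulate the filtered remainder
    return encoded, residuals


def server_aggregate(client_payloads):
    """Sum ciphertexts key by key; adding Paillier ciphertexts yields the
    ciphertext of the sum, so the server never sees plaintext gradients."""
    total = {}
    for payload in client_payloads:
        for name, kv in payload.items():
            layer = total.setdefault(name, {})
            for idx, ct in kv.items():
                layer[idx] = layer[idx] + ct if idx in layer else ct
    return total


# Toy usage: two clients, one layer, aggressive keep_ratio for illustration.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
grads = {"fc1": np.array([0.5, -0.01, 0.002, -0.9])}
enc_a, res_a = client_update(grads, {}, pub, keep_ratio=0.5)
enc_b, res_b = client_update(grads, {}, pub, keep_ratio=0.5)
agg = server_aggregate([enc_a, enc_b])
print({i: priv.decrypt(ct) for i, ct in agg["fc1"].items()})  # ~{0: 1.0, 3: -1.8}
```

Because only the k surviving values per layer are encrypted and transmitted, both the HE operation count and the network footprint shrink roughly in proportion to the sparsification ratio, which is the effect the reported 6.4×/13.1× and 5.9×/12.5× reductions reflect.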
