Abstract

Cross-silo federated learning (FL) allows participants to collaboratively train a machine learning model while ensuring no data leaves their premises. As such, it has great potential in many sensitive data-driven scenarios, such as finance and intelligent medicine. To further preserve the privacy of the participants' local models, existing industrial FL frameworks use additively homomorphic encryption (HE) to mask the local gradients during model aggregation. However, the enormous computation and communication overhead introduced by HE makes it infeasible in practice. In this paper, we develop a framework called FLZip, which substantially reduces the HE-induced overhead through gradient-aware compression. In FLZip, instead of encrypting every individual gradient, each client first filters out insignificant gradients based on their magnitudes, considering each layer independently. To allow aggregation to be performed on ciphertexts of the sparsified gradients, FLZip uses a key-value pair encoding scheme. To counter the accuracy loss caused by sparsification, FLZip further employs an error accumulation mechanism. Compared with BatchCrypt, a state-of-the-art FL framework with HE, extensive experiments show that FLZip dramatically reduces the number of encryption and decryption operations by 6.4× and 13.1×, and shrinks the network footprints to and from the server by 5.9× and 12.5×, respectively, without degrading model accuracy.
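To make the client-side pipeline concrete, below is a minimal Python sketch of the per-layer magnitude sparsification, key-value encoding, and error accumulation described above. It is an illustration under stated assumptions, not FLZip's actual implementation: the function name `compress_layer` and the `keep_ratio` parameter are hypothetical, and encryption is elided (the returned key-value pairs are what would be encrypted with additive HE so the server can sum ciphertext values that share a key).

```python
import numpy as np

def compress_layer(grad, residual, keep_ratio=0.01):
    """Hypothetical sketch of gradient-aware compression for one layer.

    grad:       this round's gradient for the layer (1-D array)
    residual:   error accumulated from previous rounds (same shape)
    keep_ratio: fraction of entries to keep (illustrative parameter)

    Returns (keys, values, new_residual). Each (key, value) pair is a
    gradient index and its magnitude-corrected value; only the values
    would be encrypted and sent, enabling positional aggregation on
    ciphertexts at the server.
    """
    corrected = grad + residual                      # fold in past error
    k = max(1, int(keep_ratio * corrected.size))
    # keys of the k largest-magnitude entries in this layer
    keys = np.argpartition(np.abs(corrected), -k)[-k:]
    values = corrected[keys]
    # entries that were filtered out are remembered (error accumulation)
    # and re-applied to next round's gradient
    new_residual = corrected.copy()
    new_residual[keys] = 0.0
    return keys, values, new_residual
```

Because each layer is compressed independently, the keep ratio can adapt to per-layer gradient statistics; the server then adds encrypted values whose keys match, which is exactly the operation additive HE supports.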
