Abstract

To prevent private information from leaking during the aggregation phase of federated learning (FL), many frameworks use homomorphic encryption (HE) to mask local model updates. However, the heavy overhead of these frameworks makes them unsuitable for cross-device FL, where the clients are vast numbers of mobile and edge devices with limited computing resources. Worse still, some of these frameworks fail to handle clients joining and leaving dynamically. To overcome these shortcomings, we propose a threshold multi-key HE scheme, tMK-CKKS, and design an efficient and robust privacy-preserving FL framework on top of it. Robustness means that our framework allows clients to join or drop out during training. Moreover, because tMK-CKKS packs multiple messages into a single ciphertext, our framework significantly reduces computation and communication overhead. The threshold mechanism in tMK-CKKS further ensures that our framework resists collusion between the server and no more than t (the threshold value) curious internal clients. Finally, we implement our framework in FedML and conduct extensive experiments to evaluate it. Utility evaluations on 6 benchmark datasets show that our framework protects privacy without sacrificing model accuracy. Efficiency evaluations on 4 typical deep learning models demonstrate that our framework speeds up computation by at least 1.21× over the xMK-CKKS-based framework, 15.84× over the BatchCrypt-based framework, and 20.30× over the CRT-Paillier-based framework, and reduces the communication burden by at least 8.61 MB relative to the BatchCrypt-based framework, 35.36 MB relative to the xMK-CKKS-based framework, and 42.58 MB relative to the CRT-Paillier-based framework. These advantages in both computation and communication grow with the size of the deep learning model.
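To make the packing claim concrete, the following is a minimal sketch of the single-key CKKS packing and ciphertext-level aggregation that tMK-CKKS builds on. It is not the paper's tMK-CKKS implementation (which adds threshold multi-key support and is integrated into FedML); it uses the open-source TenSEAL library, chosen here purely for illustration, to show why encoding many model parameters into one ciphertext cuts both computation and communication.

```python
# Hedged sketch: single-key CKKS via TenSEAL, illustrating the packing and
# additive aggregation underlying tMK-CKKS. The paper's scheme replaces the
# single secret key with a threshold multi-key setup; that part is omitted.
import tenseal as ts

# CKKS context; poly_modulus_degree of 8192 yields 4096 packing slots, so up
# to 4096 model parameters fit into a single ciphertext.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Two clients' (toy) local model updates, each packed into one ciphertext.
update_a = ts.ckks_vector(context, [0.10, -0.20, 0.30])
update_b = ts.ckks_vector(context, [0.05, 0.15, -0.10])

# The server adds ciphertexts directly and never sees the plaintext updates.
aggregate = update_a + update_b

# Decryption (done by the single key owner here; distributed across clients
# by the threshold mechanism in the paper) recovers the summed update, up to
# the small approximation error inherent to CKKS.
print(aggregate.decrypt())  # approximately [0.15, -0.05, 0.20]
```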
