Abstract

Federated Learning (FL) is a novel machine learning paradigm that enables multiple participants to collaboratively train a model by aggregating each client's local gradients without sharing sensitive training data. Transmitting clients' gradients in plaintext, however, leaves FL systems vulnerable to inference attacks, which aim to infer clients' data from their model updates. Masking local gradients with additive homomorphic encryption (especially the Paillier scheme) before aggregation is a straightforward way to ensure security. Unfortunately, this approach requires traversing and encrypting gradients element by element, resulting in low computational efficiency and high communication cost. In this paper, we present BatchAgg, an efficient aggregation protocol for FL that exploits the ciphertext packing technique provided by an approximate homomorphic encryption scheme. Instead of encrypting gradients individually, BatchAgg encrypts a gradient vector into a single ciphertext and performs homomorphic operations in batches. BatchAgg is built on the federated averaging protocol: we implement a federated image classification model on horizontally partitioned datasets as a baseline and replace its Paillier-based aggregation protocol with BatchAgg to accelerate model training at lower communication cost. Evaluation results show that BatchAgg achieves a 60× training speedup while reducing communication cost by 57% compared with Paillier at the same security level. Moreover, BatchAgg can be easily integrated into existing models and preserves security throughout federated training without causing performance loss.
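To illustrate the packed-ciphertext aggregation idea described above, the following is a minimal sketch using the open-source TenSEAL library's CKKS vectors as a stand-in for the approximate homomorphic encryption scheme; the values, parameters, and single-key setup are illustrative assumptions, not the authors' BatchAgg implementation.

```python
# Sketch: aggregate packed gradient vectors under CKKS (via TenSEAL),
# one ciphertext per client instead of one Paillier ciphertext per element.
import tenseal as ts

# CKKS context with illustrative parameters (in a real deployment the
# secret key would not be held by the aggregation server).
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40

# Each client's local gradient vector (toy values for illustration).
client_gradients = [
    [0.10, -0.20, 0.05, 0.30],
    [0.12, -0.18, 0.07, 0.28],
    [0.08, -0.22, 0.03, 0.32],
]

# Pack each gradient vector into a single CKKS ciphertext.
encrypted = [ts.ckks_vector(ctx, g) for g in client_gradients]

# Server-side aggregation: one batched homomorphic addition per client,
# operating on all gradient slots at once.
aggregate = encrypted[0]
for c in encrypted[1:]:
    aggregate = aggregate + c

# Federated averaging: decrypt once, then divide by the number of clients.
avg_gradient = [x / len(client_gradients) for x in aggregate.decrypt()]
print(avg_gradient)  # approximately [0.10, -0.20, 0.05, 0.30]
```

The point of the sketch is the cost model: with ciphertext packing, the number of encryptions, transmitted ciphertexts, and homomorphic additions scales with the number of clients rather than with the number of gradient elements.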
