Abstract

Federated learning (FL) enables data owners to train a global model by sharing gradients while keeping private training data local. However, recent research has demonstrated that an adversary may infer clients' private training data from the exchanged local gradients, e.g., via deep leakage from gradients (DLG). Many existing privacy-preserving approaches make use of differential privacy (DP) to guarantee privacy. Nevertheless, the widely used privacy-budget allocation of DP (e.g., an even distribution across layers) leads to a sharp decline in model accuracy. To improve model accuracy, some schemes allocate the privacy budget only to the fully connected layers. However, we reveal that an adversary may still reconstruct the private training data by applying the DLG attack to the gradients of the convolutional layers. In this article, we propose a fine-grained DP federated learning (DPFL) scheme, which simultaneously guarantees privacy and maintains high model performance. Specifically, inspired by methods that measure the importance of layers in deep learning, we propose a fine-grained method that allocates noise according to the importance of each layer in order to maintain high model performance. In addition, we combine an active client selection strategy with DPFL and fine-tune the model on the server with a public data set to further ensure model performance. We evaluate DPFL under both independent and identically distributed (i.i.d.) and non-i.i.d. data settings and show that our method achieves accuracy similar to that of plain FL (e.g., FedAvg). We also demonstrate that DPFL can resist the DLG attack, verifying its privacy guarantee.
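
To illustrate the idea of importance-aware noise allocation described above, the following is a minimal sketch, not the paper's exact formulation: it assumes layer importance is approximated by the gradient L2 norm and that more important layers receive a smaller Gaussian noise scale. The function name `allocate_layerwise_noise` and the allocation rule are illustrative assumptions.

```python
import numpy as np

def allocate_layerwise_noise(grads, total_sigma=1.0, eps=1e-12):
    """Hypothetical per-layer DP noise allocation.

    Layers whose gradients are deemed more 'important' (here, larger L2 norm)
    receive a smaller Gaussian noise scale, so the perturbation budget is
    spent where it hurts accuracy least. The importance measure and the
    allocation rule are assumptions for illustration only.
    """
    importance = np.array([np.linalg.norm(g) for g in grads]) + eps
    weights = importance / importance.sum()
    # Less noise for more important layers: sigma_i shrinks as weight grows.
    sigmas = total_sigma * (1.0 - weights)
    noisy = [g + np.random.normal(0.0, s, size=g.shape)
             for g, s in zip(grads, sigmas)]
    return noisy, sigmas

# Example: two "layers" with different gradient magnitudes.
grads = [np.ones((3, 3)) * 5.0, np.ones((2, 2)) * 0.1]
noisy_grads, sigmas = allocate_layerwise_noise(grads)
print(sigmas)  # the larger-norm layer gets the smaller noise scale
```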
