Abstract

Federated Learning (FL) is a distributed learning paradigm in which clients collaboratively train a global model by sharing gradients while keeping their local data private. Recent research has shown that an adversary can reveal private local data through gradient inversion attacks. However, setting a large batch size in local training can effectively defend against these attacks by confusing the gradients computed on different private samples. Although advanced methods have been proposed to improve the performance of gradient inversion attacks on large-batch training, they are limited to specific model architectures (e.g., fully connected neural networks (FCNNs) with ReLU layers). To address these limitations, we propose a novel gradient inversion attack that compromises privacy in large-batch FL via model poisoning. We poison clients' models with malicious parameters that are purposely constructed to reduce the confusion in the aggregated gradients. For FCNNs, the private data can be perfectly recovered by analyzing the gradients of the first fully connected (FC) layer. For convolutional neural networks (CNNs), we extend our method into a hybrid approach that combines the analytic method with an optimization-based method: we first recover the feature maps after the convolutional layers and then reconstruct the private data by minimizing a data-wise feature-map matching loss. We demonstrate the effectiveness of our method on four datasets and show that it outperforms previous methods for large-batch FL (e.g., batch sizes of 64, 128, and 256) and for models with different activation layers (e.g., ReLU, Sigmoid, and Tanh).
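The abstract's analytic step builds on a known property of FC layers: for y = Wx + b, the gradient of the loss with respect to a weight row is the output gradient of that row times the input, so dividing the weight gradient by the bias gradient reveals the input exactly. The following is a minimal sketch of that single-sample recovery, not the paper's poisoning construction; the layer sizes, `fc`, and the use of cross-entropy are illustrative assumptions.

```python
import torch

# Sketch: analytic recovery of one input from the first FC layer's gradients.
torch.manual_seed(0)
d_in, d_out = 16, 8
x = torch.rand(1, d_in)                      # private "data" to be recovered
y = torch.randint(0, d_out, (1,))            # its label

fc = torch.nn.Linear(d_in, d_out)            # first FC layer, with bias
loss = torch.nn.functional.cross_entropy(fc(x), y)
dW, db = torch.autograd.grad(loss, (fc.weight, fc.bias))

# dL/dW_i = (dL/dy_i) * x and dL/db_i = dL/dy_i, so any row with a
# nonzero bias gradient gives x back exactly.
i = db.abs().argmax()
x_rec = dW[i] / db[i]
print(torch.allclose(x_rec, x.squeeze(0), atol=1e-5))  # True
```

With a large batch the gradients of all samples are summed, which is the "confusion" the abstract refers to; the paper's poisoned parameters are designed to undo it. For the CNN case, the abstract's second stage can be illustrated by a generic feature-map matching optimization; here `conv`, the input shape, and the Adam settings are assumptions, and `f_true` stands in for a feature map that has already been recovered from the FC-layer gradients.

```python
# Sketch: reconstruct an input whose convolutional feature map matches
# a recovered target feature map.
conv = torch.nn.Sequential(torch.nn.Conv2d(1, 4, 3, padding=1), torch.nn.ReLU())
x_true = torch.rand(1, 1, 8, 8)
f_true = conv(x_true).detach()               # stands in for the recovered feature map

x_dummy = torch.zeros_like(x_true, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = ((conv(x_dummy) - f_true) ** 2).mean()  # feature-map matching loss
    loss.backward()
    opt.step()
```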
