Abstract

As a distributed learning paradigm, federated learning is intended to protect data privacy by training models without exchanging users' local data. Even so, the gradient inversion attack, in which an adversary reconstructs the original data from shared training gradients, is widely regarded as a severe threat. Nevertheless, most existing studies rely on impractical assumptions and cover only a narrow range of applications. To address these shortcomings, we propose a comprehensive framework for the gradient inversion attack, with carefully designed algorithms for image and label reconstruction. For image reconstruction, we fully exploit the generative image prior, derived from widely used generative models, to improve the reconstructed results, aided by iterative optimization over mixed spaces and a gradient-free optimizer. For label reconstruction, we design an adaptive recovery algorithm that accounts for the real data distribution, adapting previous attacks to more complex scenarios. Moreover, we incorporate a gradient approximation method to efficiently extend our attack to the FedAvg scenario. We empirically validate our attack framework on benchmark datasets and through ablation studies, under loose assumptions and complicated circumstances. We hope this work highlights the necessity of privacy protection in federated learning and motivates more effective and robust defense mechanisms.
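To make the threat model concrete, the following is a minimal sketch of the gradient-matching idea underlying gradient inversion attacks, not the paper's actual algorithm: a toy linear-regression client with publicly known weights shares the gradient of its loss on one private sample, and the attacker optimizes a dummy sample so that its gradient matches the shared one. All names and the model itself are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup (not the paper's models): a linear-regression
# client with public weights w shares the gradient g of its loss on one
# private sample (x_true, y_true). The attacker observes only w and g.
rng = np.random.default_rng(0)
w = np.array([0.5, -1.0, 2.0])          # public model weights
x_true = np.array([1.0, 2.0, -1.0])     # private input (unknown to attacker)
y_true = 0.5                            # private label (unknown to attacker)

def shared_gradient(x, y):
    """Gradient of the squared loss 0.5*(w.x - y)^2 with respect to w."""
    return (w @ x - y) * x

g = shared_gradient(x_true, y_true)     # what the attacker observes

# Gradient-matching inversion: optimize a dummy sample (x_hat, y_hat) so
# that its gradient matches g, by gradient descent on ||g_hat - g||^2.
x_hat = rng.standard_normal(3)
y_hat = rng.standard_normal()
lr = 1e-3
for _ in range(100_000):
    r = w @ x_hat - y_hat               # residual of the dummy sample
    d = r * x_hat - g                   # gradient mismatch g_hat - g
    # analytic derivatives of ||r*x_hat - g||^2 w.r.t. x_hat and y_hat
    grad_x = 2.0 * (r * d + (d @ x_hat) * w)
    grad_y = -2.0 * (d @ x_hat)
    x_hat -= lr * grad_x
    y_hat -= lr * grad_y

# How closely the reconstructed sample reproduces the shared gradient.
mismatch = np.linalg.norm(shared_gradient(x_hat, y_hat) - g)
```

Note that for this toy model the recovered sample is only guaranteed to reproduce the shared gradient, not to equal the private sample exactly; the paper's contributions (generative image priors, mixed-space optimization, adaptive label recovery) address precisely this gap in realistic settings.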
