Abstract
Federated learning (FL) is a distributed deep learning framework that has become increasingly popular in recent years. Essentially, FL enables numerous participants and a parameter server to co-train a deep learning model through shared gradients without revealing the private training data. Recent studies, however, have shown that a potential adversary (either the parameter server or a participant) can recover private training data from the shared gradients; such behavior is called a gradient leakage attack (GLA). In this study, we first present an overview of FL systems and outline the GLA philosophy. We classify the existing GLAs into two paradigms: optimization-based and analytics-based attacks. In particular, the optimization-based approach formulates the attack as an optimization problem, whereas the analytics-based approach formulates it as a problem of solving a system of linear equations. We present a comprehensive review of the state-of-the-art GLA algorithms, followed by a detailed comparison. Based on our observations of the shortcomings of the existing optimization-based and analytics-based methods, we devise a new generation-based GLA paradigm. We demonstrate the superiority of the proposed GLA in terms of data reconstruction performance and efficiency, thereby posing a greater potential threat to federated learning protocols. Finally, we pinpoint a variety of promising future directions for GLA research.
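To make the analytics-based paradigm concrete, the following is a minimal, self-contained sketch (not taken from the surveyed work; all model sizes, weights, and data values are made-up toy assumptions). For a fully-connected layer z = Wx + b, the chain rule gives dL/dW[i][j] = delta[i] * x[j] and dL/db[i] = delta[i], where delta = dL/dz. An adversary observing the shared gradients can therefore recover the private input x exactly by element-wise division, i.e., by solving these linear equations:

```python
# Sketch of an analytics-based gradient leakage attack on a single
# fully-connected layer z = W x + b with squared-error loss.
# Since dL/dW[i][j] = delta[i] * x[j] and dL/db[i] = delta[i],
# the private input satisfies x[j] = (dL/dW[i][j]) / (dL/db[i])
# for any output row i whose bias gradient is nonzero.
# Toy values throughout; this is illustrative, not the paper's implementation.

def forward_and_grads(W, b, x, y):
    """Compute z = W x + b, loss = 0.5 * ||z - y||^2, and its gradients."""
    z = [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
         for i in range(len(b))]
    delta = [z[i] - y[i] for i in range(len(y))]      # dL/dz
    grad_W = [[delta[i] * x[j] for j in range(len(x))]
              for i in range(len(b))]                 # dL/dW = delta x^T
    grad_b = delta[:]                                 # dL/db = delta
    return grad_W, grad_b

def reconstruct_input(grad_W, grad_b, eps=1e-12):
    """Recover x from the shared gradients by solving the linear equations."""
    for i, d in enumerate(grad_b):
        if abs(d) > eps:                              # pick a usable row
            return [g / d for g in grad_W[i]]
    return None                                       # all bias gradients vanished

# "Private" training sample that the attacker never observes directly.
W = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.4]]
b = [0.05, -0.1]
x_private = [1.5, -2.0, 0.25]
y = [1.0, 0.0]

gW, gb = forward_and_grads(W, b, x_private, y)        # gradients shared in FL
x_rec = reconstruct_input(gW, gb)
print(x_rec)  # matches x_private up to floating-point error
```

This is why analytics-based attacks can be exact and cheap on such layers, while the optimization-based paradigm instead iteratively adjusts a dummy input to minimize the distance between its gradients and the shared ones.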