Abstract
Federated learning (FL) has emerged as a promising approach to the data-silo problem, enabling multiple participants to train a joint model collaboratively without centralizing their data. Security and privacy considerations in FL focus on ensuring the robustness of the global model and protecting participants' information. However, the FL paradigm is exposed to various security threats from adversarial aggregators and participants. It is therefore necessary to comprehensively identify and classify potential threats to provide a theoretical basis for FL with security guarantees. In this paper, we construct a unique classification of attacks, reviewing state-of-the-art research on security and privacy issues in FL from the perspective of malicious threats posed by the different computing parties. Specifically, we categorize attacks according to whether they are performed by the aggregator or by participants, highlighting Deep Gradients Leakage attacks and Generative Adversarial Network attacks. Following this overview of attack methods, we discuss the primary mitigation techniques against security risks and privacy breaches, especially the application of blockchain and Trusted Execution Environments. Finally, several promising directions for future research are discussed.