Abstract

With more regulations tackling the protection of users’ privacy-sensitive data in recent years, access to such data has become increasingly restricted. A new decentralized training paradigm, known as Federated Learning (FL), enables multiple clients located at different geographical locations to learn a machine learning model collaboratively without sharing their data. While FL has recently emerged as a promising solution to preserve users’ privacy, this new paradigm’s potential security implications may hinder its widespread adoption. The existing FL protocols exhibit new, unique vulnerabilities that adversaries can exploit to compromise the trained model. FL is often preferred in learning environments where security and privacy are the key concerns. Therefore, it is crucial to raise awareness of the consequences resulting from the new threats to FL systems. To date, the security of traditional machine learning systems has been widely examined. However, many open challenges and complex questions still surround FL security. In this paper, we bridge the gap in the FL literature by providing a comprehensive survey of the unique security vulnerabilities exposed by the FL ecosystem. We highlight the sources of these vulnerabilities, key attacks on FL, and defenses, along with their unique challenges, and discuss promising future research directions towards more robust FL.

Highlights

  • The emerging Artificial Intelligence (AI) market is accompanied by an unprecedented growth of cloud-based AI solutions

  • Each training round in Federated Learning (FL) involves broadcasting the global model to the clients, local gradient computation, and client reports to the central aggregator

  • Engineers typically only focus on FL robustness to the specific type of adversarial examples incorporated during training, potentially leaving the deployed model vulnerable to other forms of adversarial noise


Summary

INTRODUCTION

The emerging Artificial Intelligence (AI) market is accompanied by an unprecedented growth of cloud-based AI solutions, which typically require users’ data to be collected centrally. By sharing model parameters instead of data, FL avoids this centralization, but it also introduces new attack surfaces at training time by enhancing the capabilities of the adversary [16].

In FL, the attacker controls the entire training process for a few participants, including data processing, the local training pipeline, and the model updates they report. These potent adversary capabilities allow malicious clients to carry out devastating attacks. On the server side, the aggregator can observe client updates, tamper with the aggregation process, and control the participants’ view of the global model and of the compression parameters. Compromised clients corrupt the FL training process by exploiting either the model parameters or the training data to craft an attack.

Failures are not always malicious: each training round in FL involves broadcasting the global model to the clients, local gradient computation, and client reports to the central aggregator, and non-malicious failures can arise at any of these steps. Finally, engineers typically focus only on FL robustness to the specific type of adversarial examples incorporated during training, potentially leaving the deployed model vulnerable to other forms of adversarial noise.
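
To make the round structure concrete, here is a minimal Python sketch of one FedAvg-style round on a toy least-squares task. The function names, the single local gradient step, and the synthetic data are illustrative assumptions rather than a protocol prescribed by the survey; the comment inside fl_round marks the attack surface discussed above, where the server only ever sees the updates that clients choose to report.

```python
import numpy as np

def local_train(global_weights, data, lr=0.1):
    # Hypothetical local update: one gradient step of least-squares on the
    # client's private data. Real FL clients would run several SGD epochs.
    X, y = data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fl_round(global_weights, client_datasets):
    # One FL round: broadcast the global model, let every client compute a
    # local update, then aggregate the reported updates (FedAvg = plain mean).
    updates = [local_train(global_weights, d) for d in client_datasets]
    # The server only ever sees these reported vectors, never the raw data;
    # a compromised client could report an arbitrary update here.
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):
        X = rng.normal(size=(200, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=200)
        clients.append((X, y))

    w = np.zeros(2)
    for _ in range(100):
        w = fl_round(w, clients)
    print("recovered weights:", w)  # should be close to [2.0, -1.0]
```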
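
The last remark refers to adversarial training performed on the clients. The sketch below is a hedged example assuming a PyTorch setup that is not part of the survey: it mixes clean and FGSM-perturbed batches in one local step. The robustness gained this way is specific to the L-infinity FGSM noise seen during training, which is exactly the limitation noted above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.1):
    # Craft an L-infinity FGSM adversarial example for the current model.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def local_adversarial_step(model, optimizer, x, y):
    # One client-side training step mixing clean and adversarial batches.
    x_adv = fgsm_perturb(model, x, y)
    optimizer.zero_grad()  # clear grads accumulated while crafting x_adv
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny demo on random data; shapes and hyper-parameters are arbitrary.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    x = torch.randn(64, 20)
    y = torch.randint(0, 3, (64,))
    for _ in range(10):
        print(local_adversarial_step(model, opt, x, y))
```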

ATTACKS IN FEDERATED LEARNING
DEFENSES IN FEDERATED LEARNING
MOVING TARGET DEFENSE
CONCLUSION
