Abstract

Empirical attacks on Federated Learning (FL) systems show that FL presents numerous attack surfaces throughout its execution. These attacks can not only cause models to fail at specific tasks but also leak private information. While previous surveys have identified the risks, cataloged the attack methods available in the literature, or provided basic taxonomies to classify them, they have mainly focused on risks in the training phase of FL. In this work, we survey the threats to, attacks on, and defenses of FL across its entire execution, organized into three phases: the data and behavior auditing phase, the training phase, and the predicting phase. We further provide a comprehensive analysis of these threats, attacks, and defenses, and summarize the open issues and a taxonomy for each phase. Our work examines the security and privacy of FL from the viewpoint of its execution process, and we highlight that establishing trusted FL requires adequate measures to mitigate security and privacy threats at every phase. Finally, we discuss the limitations of current attack and defense approaches and outline promising directions for future FL research.

Highlights

  • As smart cities grow in popularity, the multisource heterogeneous data generated by various organizations and individuals have become increasingly large and diverse

  • Federated Learning (FL) itself is still riddled with attack surfaces that put data privacy and model robustness at risk

  • We identify the open issues and provide a taxonomy of FL threats based on the phases of its execution: the data and behavior auditing phase, the training phase, and the predicting phase


Summary

Introduction

As smart cities grow in popularity, the multisource heterogeneous data generated by various organizations and individuals have become increasingly large and diverse. We analyze the security and privacy threats according to the multi-phase framework of FL execution: data and behavior auditing, training, and predicting. The data and behavior auditing phase forms the first line of defense; if this line is breached, a malicious local worker can use low-quality or poisoned data to degrade the performance of the global model or even corrupt it. In this phase, the local workers and the central server are also exposed to existing system, software, and network security threats.
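To make the data-poisoning threat concrete, the following minimal sketch (illustrative only, not code from the paper) shows a single FedAvg communication round in which one local worker flips its labels before training; the logistic-regression model, the synthetic client data, and all function names are assumptions chosen for brevity.

import numpy as np

def local_update(weights, data, labels, lr=0.1, poison=False):
    # One local gradient step of logistic regression. A malicious
    # worker can flip its labels before training (data poisoning).
    if poison:
        labels = 1 - labels  # label flipping, a simple poisoning strategy
    preds = 1 / (1 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def fed_avg(updates, sizes):
    # Server-side FedAvg: average client updates weighted by data size.
    # Without auditing, poisoned updates are averaged in unchecked.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# One communication round with three workers, one of them malicious.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
clients = [(rng.normal(size=(20, 5)), rng.integers(0, 2, 20))
           for _ in range(3)]
updates = [local_update(global_w, X, y, poison=(i == 2))  # worker 2 is malicious
           for i, (X, y) in enumerate(clients)]
global_w = fed_avg(updates, [len(y) for _, y in clients])

Without an auditing step before aggregation, the poisoned update from worker 2 is averaged into the global model unchecked, which is exactly the failure mode the data and behavior auditing phase is meant to prevent.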
