Abstract

Federated learning (FL) has recently emerged as an attractive distributed machine learning paradigm for harnessing distributed data in edge computing. Its salient feature is that individual datasets can stay local throughout the training process; only model updates need to be exchanged for aggregation. Despite its appeal, FL is confronted with critical security and privacy concerns. First, even sharing model updates/gradients can leak information about the local datasets. Second, malicious clients may launch poisoning attacks to compromise the utility of the trained model. Driven by these challenges, various research efforts have been made to secure FL. However, most existing works consider only privacy preservation or only robustness against poisoning attacks. In this paper, we propose RoPPFL, a new robust and privacy-preserving FL framework for edge computing applications, which supports hierarchical federated learning with privacy preservation as well as robust aggregation against poisoning attacks. RoPPFL bridges local differential privacy for privacy protection with similarity-based robust aggregation for resistance to malicious clients. We formally analyze the convergence and privacy guarantees of RoPPFL. Extensive experiments demonstrate the superior performance of RoPPFL.
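To make the two building blocks concrete, the following is a minimal sketch of how local differential privacy and similarity-based robust aggregation could be combined. The specific mechanisms here (Gaussian perturbation of clipped updates, cosine-similarity weighting) are illustrative assumptions, not the exact algorithms of RoPPFL; the function names `ldp_perturb` and `similarity_weighted_aggregate` are hypothetical.

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Client side: clip the update's L2 norm, then add Gaussian noise.

    This is a common LDP-style perturbation sketch; the paper's actual
    noise mechanism and parameters may differ.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def similarity_weighted_aggregate(updates):
    """Server side: weight each update by its mean cosine similarity
    to the other updates, down-weighting outliers that may come from
    poisoning clients (weights below zero are truncated to zero)."""
    n = len(updates)
    sims = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                a, b = updates[i], updates[j]
                sims[i] += a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sims = np.clip(sims / (n - 1), 0.0, None)
    weights = sims / sims.sum() if sims.sum() > 0 else np.full(n, 1.0 / n)
    return sum(w * u for w, u in zip(weights, updates))
```

For example, if three honest clients submit updates pointing in roughly the same direction and one malicious client submits an opposite-direction update, the malicious update's mean similarity is negative, so its weight is truncated to zero and the aggregate follows the honest majority.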
