Abstract

Federated learning (FL) is an emerging decentralized machine learning paradigm that allows multiple clients to participate in a joint training task by merging their local models into a global model. While FL offers strong data privacy for each participant, the service provider can hardly verify the validity of the local datasets, which gives malicious clients an opportunity to undermine the functionality of the global model, i.e., to mount backdoor attacks. To find potential attackers among the clients, we apply a topological data analysis tool called Persistent Homology (PH). PH reveals the correlation between a model's topological properties and the status of its neurons, indicating whether the model is well generalized or overfits to specific samples. We train a classifier on the PH features of neural network models, ultimately composing a secure federated learning mechanism. The results show that our method detects malicious clients mounting different types of backdoor attacks with high accuracy, even under highly imbalanced non-i.i.d. data distributions.

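To make the pipeline concrete, below is a minimal sketch (not the authors' released code) of deriving PH features from a client's model weights and training a detector on them. It assumes a Vietoris-Rips filtration via the ripser package over a 1 − |correlation| distance between neurons' incoming weight vectors, simple lifetime statistics as features, and a scikit-learn random forest as the classifier; the paper's exact filtration and detector may differ.

```python
# Sketch: PH-feature-based detection of suspicious FL clients.
# Distance construction, feature summary, and classifier choice are
# illustrative assumptions, not the paper's exact method.
import numpy as np
from ripser import ripser                      # Vietoris-Rips persistence
from sklearn.ensemble import RandomForestClassifier

def ph_features(weight_matrix, maxdim=1):
    """Summarize the persistence diagrams of one layer's neurons.

    Neurons are points; the distance between two neurons is
    1 - |correlation| of their incoming weight vectors (an assumed,
    common choice when studying neural-network topology).
    """
    corr = np.corrcoef(weight_matrix)          # neuron-by-neuron correlation
    dist = 1.0 - np.abs(corr)
    dgms = ripser(dist, maxdim=maxdim, distance_matrix=True)["dgms"]
    feats = []
    for dgm in dgms:                           # one diagram per homology dim
        finite = np.isfinite(dgm[:, 1])        # drop infinite-death features
        lifetimes = dgm[finite, 1] - dgm[finite, 0]
        feats += [lifetimes.sum(), lifetimes.max(initial=0.0), len(lifetimes)]
    return np.array(feats)

# Hypothetical usage: each client contributes one (n_neurons, n_inputs)
# weight matrix; labels mark known benign (0) / malicious (1) clients.
rng = np.random.default_rng(0)
client_weights = [rng.normal(size=(32, 64)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)           # placeholder labels

X = np.stack([ph_features(w) for w in client_weights])
detector = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(detector.predict(X[:3]))                 # flag suspicious clients
```

In deployment, the server would extract these features from each submitted local model per round and exclude clients the detector flags before aggregation.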