Abstract

Federated learning is a distributed machine learning approach that allows a neural network to be trained without exposing private user data. Despite its advantages, federated learning schemes still face two critical security challenges: user privacy disclosure and Byzantine robustness. An adversary may try to infer private data from the trained local gradients or compromise the global model update. To tackle these challenges, we propose PPBRFL, a privacy-preserving Byzantine-robust federated learning scheme. To resist Byzantine attacks, we design a novel Byzantine-robust aggregation method based on cosine similarity, which safeguards the global model update and improves the model's classification accuracy. Furthermore, we introduce a reward-and-penalty mechanism that accounts for users' behavior to mitigate the impact of Byzantine users on the global model. To protect user privacy, we use symmetric homomorphic encryption to encrypt the users' trained local models, which incurs low computation cost while preserving model accuracy. We experimentally assess the performance of PPBRFL. The results show that, compared to traditional federated learning schemes, PPBRFL maintains model classification accuracy while ensuring privacy preservation and Byzantine robustness.
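
The abstract does not spell out the aggregation rule, but the following minimal Python sketch illustrates how a cosine-similarity filter of this general kind can work, assuming the server holds a trusted reference update (e.g., computed on a small clean root dataset) and a simple acceptance threshold. The function names, the `threshold` parameter, and the fallback behavior are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two flattened model updates."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def robust_aggregate(client_updates, reference_update, threshold=0.0):
    """Hypothetical sketch: accept only client updates whose direction
    agrees with the trusted reference update (cosine similarity above
    `threshold`), then average the survivors into the global update."""
    accepted = [u for u in client_updates
                if cosine_similarity(u, reference_update) > threshold]
    if not accepted:                 # every update rejected: fall back to
        return reference_update     # the reference direction unchanged
    return np.mean(accepted, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=10) for _ in range(8)]  # benign updates
    byzantine = [-5.0 * h for h in honest[:2]]                  # sign-flipped updates
    reference = np.ones(10)          # assumed server-side reference update
    agg = robust_aggregate(honest + byzantine, reference)
    print(agg.round(2))              # close to the honest mean; attackers filtered
```

Because the filter compares update directions rather than magnitudes, a sign-flipping or scaled malicious update is rejected even when its norm looks unremarkable; a reward-and-penalty mechanism such as the one the abstract describes could further down-weight clients that are repeatedly rejected across rounds.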
