Abstract
Federated learning is a distributed machine learning approach that allows neural networks to be trained without exposing private user data. Despite its advantages, federated learning schemes still face two critical security challenges: user privacy disclosure and Byzantine robustness. An adversary may try to infer private data from the trained local gradients or compromise the global model update. To tackle these challenges, we propose PPBRFL, a privacy-preserving Byzantine-robust federated learning scheme. To resist Byzantine attacks, we design a novel Byzantine-robust aggregation method based on cosine similarity, which safeguards the global model update and improves the model's classification accuracy. Furthermore, we introduce a reward-and-penalty mechanism that accounts for users' behavior to mitigate the impact of Byzantine users on the global model. To protect user privacy, we use symmetric homomorphic encryption to encrypt users' trained local models, which incurs low computation cost while maintaining model accuracy. We experimentally assess the performance of PPBRFL. The results show that, compared with traditional federated learning schemes, PPBRFL maintains model classification accuracy while ensuring privacy preservation and Byzantine robustness.
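To illustrate the general idea behind cosine-similarity-based robust aggregation (this is a generic sketch, not the paper's actual PPBRFL algorithm; the median reference vector, the clipping of negative scores, and the function names are all illustrative assumptions), the server can score each client's flattened update against a robust reference direction and down-weight updates that point away from it:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened update vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def robust_aggregate(updates):
    """Aggregate client updates, down-weighting those whose direction
    deviates from a coordinate-wise median reference update.
    NOTE: illustrative sketch only, not the PPBRFL aggregation rule."""
    ref = np.median(np.stack(updates), axis=0)  # robust reference direction
    # Clip negative similarities to zero so opposing (likely Byzantine)
    # updates receive zero weight.
    scores = np.array([max(cosine_similarity(u, ref), 0.0) for u in updates])
    if scores.sum() == 0.0:
        return ref  # fall back to the median if everything is filtered out
    weights = scores / scores.sum()
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0)

# Two honest clients pointing one way, one Byzantine client pointing the
# opposite way: the Byzantine update is assigned (near-)zero weight.
updates = [np.array([1.0, 1.0]), np.array([1.0, 0.9]), np.array([-10.0, -10.0])]
aggregated = robust_aggregate(updates)
```

In this toy run the aggregate stays close to the honest updates' direction, whereas a plain mean would be dragged strongly negative by the malicious vector.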