Abstract

Federated learning is a distributed machine learning framework that typically adopts a cloud-edge collaborative computing mode and allows multiple participants to train models without directly sharing their local data. However, participants' sensitive information may still be leaked through their gradients, and incorrect aggregated results returned by the aggregation server may degrade the quality of the jointly trained model. This paper proposes PPVerifier, a privacy-preserving and verifiable federated learning method that supports both privacy protection and verification of aggregated results in a cloud-edge collaborative computing environment. By combining Paillier homomorphic encryption with a random-number generation technique, all gradients and their ciphertexts are protected. An additive secret sharing scheme is further introduced to resist potential collusion attacks among the aggregation server, malicious participants, and edge nodes. In addition, a verification scheme based on the discrete logarithm is proposed; it not only verifies the correctness of aggregated results but also detects lazy aggregation servers, and it reduces verification overhead by more than half compared with the bilinear aggregate signature method. Finally, theoretical analysis and experiments on the MNIST dataset show that the proposed method achieves gradient protection and correctness verification of aggregated results with higher efficiency.
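The abstract's key mechanism is that Paillier encryption is additively homomorphic: the product of two ciphertexts decrypts to the sum of the plaintexts, so an aggregation server can sum encrypted gradients without seeing them. The sketch below is a minimal, illustrative implementation with deliberately tiny primes (real deployments use keys of 2048 bits or more); the prime values, variable names, and the treatment of gradients as small non-negative integers are assumptions for demonstration only, not details from the paper.

```python
import math
import random

def L(x: int, n: int) -> int:
    """Paillier L-function: L(x) = (x - 1) / n."""
    return (x - 1) // n

# Toy key generation (insecure, illustration only).
p, q = 293, 433                 # small distinct primes, assumed for the demo
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1                        # standard simplifying choice of generator
mu = pow(L(pow(g, lam, n2), n), -1, n)             # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2), n) * mu) % n

# Additive homomorphism: multiplying ciphertexts sums the plaintext gradients,
# so the aggregation server never observes g1 or g2 in the clear.
g1, g2 = 7, 35                   # two participants' gradients (toy integers)
c_agg = (encrypt(g1) * encrypt(g2)) % n2
print(decrypt(c_agg))            # 42 == g1 + g2
```

In the paper's setting this aggregation would be performed by the server over many participants' ciphertexts, with the additional random masking and secret-sharing layers the abstract describes; this sketch shows only the homomorphic-sum core.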

