Abstract

Federated learning allows a large number of participants to collaboratively train a global model without sharing their local data. Participants train local models on their own data and send only gradients to a cloud server for aggregation. Unfortunately, as a third party, the cloud server cannot be fully trusted. Existing research has shown that a compromised cloud server can extract sensitive information about participants' local data from the uploaded gradients. Worse, it can forge the aggregation result and corrupt the global model without being detected. Therefore, a secure federated learning system must guarantee both the privacy of the uploaded gradients and the correctness of their aggregation. In this article, we propose a secure and efficient federated learning scheme with verifiable weighted average aggregation. By adopting a masking technique to encrypt both the weighted gradients and the data sizes, our scheme supports privacy-preserving weighted average aggregation of gradients. Moreover, we design a verifiable aggregation tag and propose an efficient verification method to validate the weighted average aggregation result, which greatly improves the performance of aggregation verification. Security analysis shows that our scheme is provably secure, and extensive experiments demonstrate its efficiency compared with state-of-the-art approaches.
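To illustrate the core idea behind mask-based weighted aggregation, the sketch below shows how pairwise additive masks can hide each participant's weighted gradient and data size while still letting the server recover the exact weighted average, since the masks cancel in the sum. This is a minimal toy illustration, not the paper's actual construction: the mask generation, client names, and data values here are all assumptions, and a real scheme would derive masks from shared keys and add the verification tags described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 3, 4

grads = [rng.normal(size=dim) for _ in range(num_clients)]  # local gradients (toy values)
sizes = [120.0, 80.0, 200.0]                                # local data sizes (toy values)

# Pairwise random masks shared between clients i < j: client i adds the mask,
# client j subtracts it, so all masks cancel in the server-side sum.
grad_masks = {(i, j): rng.normal(size=dim)
              for i in range(num_clients) for j in range(i + 1, num_clients)}
size_masks = {(i, j): rng.normal()
              for i in range(num_clients) for j in range(i + 1, num_clients)}

def masked_upload(i):
    """Client i uploads its masked weighted gradient n_i * g_i and masked size n_i."""
    g = sizes[i] * grads[i]
    n = sizes[i]
    for j in range(num_clients):
        if i < j:
            g = g + grad_masks[(i, j)]
            n = n + size_masks[(i, j)]
        elif j < i:
            g = g - grad_masks[(j, i)]
            n = n - size_masks[(j, i)]
    return g, n

# Server sums the masked uploads; the pairwise masks cancel, leaving
# sum_i n_i * g_i and sum_i n_i, from which the weighted average follows.
uploads = [masked_upload(i) for i in range(num_clients)]
sum_g = sum(u[0] for u in uploads)
sum_n = sum(u[1] for u in uploads)
weighted_avg = sum_g / sum_n

expected = sum(s * g for s, g in zip(sizes, grads)) / sum(sizes)
assert np.allclose(weighted_avg, expected)
```

Note that no individual upload reveals `n_i * g_i` or `n_i` on its own; the server only learns the aggregate, which is the privacy property the masking is meant to provide.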
