Abstract

With the rise of neural networks, deep learning is being applied ever more widely across many fields. Federated learning is a distributed training paradigm in deep learning in which each user and a cloud server (CS) cooperatively train a shared neural network model. However, this process faces several challenging problems, such as the threat of user privacy disclosure, erroneous results returned by the server, and the difficulty of deploying a trusted center in practice. To address these problems simultaneously, we propose a verifiable federated training scheme for deep neural networks that supports privacy protection. In our scheme, key exchange is used to remove the trusted center, a double-masking protocol ensures that users' privacy is not disclosed, and a tag aggregation method guarantees the correctness of the results returned by the server. Formal security analysis and a comprehensive performance evaluation show that the proposed scheme is both secure and efficient.
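To give intuition for the double-masking idea mentioned above, the sketch below shows how pairwise masks cancel during aggregation while individual masks are removed afterwards, so the server only ever sees masked updates. This is a minimal illustration in the style of standard secure-aggregation protocols, not the paper's exact construction; all seeds, dimensions, and helper names are assumptions for the example.

```python
import numpy as np

DIM = 4          # length of each user's update vector (illustrative)
MOD = 2**31 - 1  # arithmetic over a large prime modulus (illustrative)

def prg(seed, dim=DIM):
    """Expand a shared seed into a pseudorandom mask vector."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, MOD, size=dim, dtype=np.int64)

def mask_update(uid, update, pairwise_seeds, self_seed):
    """User-side double masking: pairwise masks cancel across users,
    the individual mask is removed by the server after unmasking."""
    masked = (update + prg(self_seed)) % MOD
    for other, seed in pairwise_seeds.items():
        sign = 1 if uid < other else -1
        masked = (masked + sign * prg(seed)) % MOD
    return masked

# Toy run with three users; in a real protocol the pairwise seeds
# would be derived via Diffie-Hellman key exchange, not fixed constants.
updates = {1: np.array([1, 2, 3, 4]),
           2: np.array([5, 6, 7, 8]),
           3: np.array([9, 10, 11, 12])}
pair_seed = {frozenset({1, 2}): 11, frozenset({1, 3}): 22, frozenset({2, 3}): 33}
self_seed = {1: 101, 2: 102, 3: 103}

masked = {}
for uid, upd in updates.items():
    seeds = {o: pair_seed[frozenset({uid, o})] for o in updates if o != uid}
    masked[uid] = mask_update(uid, upd, seeds, self_seed[uid])

# Server-side aggregation: pairwise masks cancel, then individual
# masks are stripped using the seeds revealed in the unmasking round.
agg = sum(masked.values()) % MOD
agg = (agg - sum(prg(s) for s in self_seed.values())) % MOD
print(agg)  # equals the plain sum [15 18 21 24]
```

In the full scheme, the individual-mask seeds are secret-shared so the server can still unmask the aggregate when some users drop out, and a tag aggregation step would let users verify the returned sum.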
