Abstract

With the rise of neural networks, deep learning is being applied ever more widely across many fields. Federated learning is a distributed training paradigm in deep learning in which users and a cloud server (CS) cooperatively train a unified neural network model. However, this process exposes the system to several challenging problems, such as the threat of user privacy disclosure, errors in the results returned by the server, and the difficulty of implementing a trusted center in practice. To address these problems simultaneously, we propose a verifiable federated training scheme that supports privacy protection for deep neural networks. In our scheme, key exchange is used to remove the trusted center, a double-masking protocol ensures that user privacy is not disclosed, and a tag-aggregation method guarantees the correctness of the results returned by the server. Formal security analysis and a comprehensive performance evaluation indicate that the proposed scheme is secure and efficient.
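To make the double-masking idea concrete, the Python sketch below shows how pairwise masks derived from seeds agreed via key exchange cancel when the server sums the masked updates, while per-user self masks keep each individual update hidden. All names here (`prg`, `mask_update`), the toy parameters, and the no-dropout simplification are illustrative assumptions rather than the paper's exact construction; full double-masking protocols typically also secret-share the seeds so that the masks of dropped users can be recovered.

```python
# Minimal sketch of double masking for secure aggregation (assumptions:
# no user drops out, seeds are already agreed via pairwise key exchange).
import numpy as np

DIM = 4          # length of each user's model update (toy value)
MOD = 2 ** 32    # all arithmetic is done modulo this constant

def prg(seed, dim):
    """Expand a shared seed into a pseudorandom mask vector."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, MOD, size=dim, dtype=np.uint64)

def mask_update(uid, update, pairwise_seeds, self_seed):
    """Add a self mask plus signed pairwise masks to one user's update.
    The sign convention makes pairwise masks cancel in the server's sum."""
    y = (update.astype(np.uint64) + prg(self_seed, DIM)) % MOD
    for other, seed in pairwise_seeds.items():
        if other > uid:
            y = (y + prg(seed, DIM)) % MOD
        else:
            y = (y - prg(seed, DIM)) % MOD
    return y

# --- toy run with three users ---
users = [0, 1, 2]
updates = {u: np.arange(DIM) + 10 * u for u in users}    # plaintext updates
pair_seed = {(i, j): hash((i, j)) % MOD for i in users for j in users if i < j}
self_seed = {u: 1000 + u for u in users}

masked = {}
for u in users:
    seeds = {v: pair_seed[tuple(sorted((u, v)))] for v in users if v != u}
    masked[u] = mask_update(u, updates[u], seeds, self_seed[u])

# The server sums the masked updates: pairwise masks cancel. With no
# dropouts, users reveal their self seeds so the self masks can be removed.
agg = sum(masked.values()) % MOD
agg = (agg - sum(prg(self_seed[u], DIM) for u in users)) % MOD
print(agg)                            # recovered aggregate
print(sum(updates.values()) % MOD)    # matches the plaintext sum
```

The server only ever sees masked vectors, which is what allows the aggregate to be computed without revealing any single user's update; verifying that this aggregate is correct is the role played by the tag-aggregation method in the proposed scheme.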
