Abstract

With the advance of machine learning and, in particular, the explosive growth of big data, federated learning, which allows multiple participants to jointly train a high-quality global machine learning model, has gained extensive attention. However, in federated learning, inference attacks have been shown to reveal sensitive information from both local updates and global model parameters, which severely threatens user privacy. To address this challenge, this paper proposes a privacy-preserving and lossless federated learning scheme, named CORK, for deep neural networks. With CORK, multiple participants can train a global model securely and accurately with the assistance of an aggregation server. Specifically, we first design a drop-tolerant secure aggregation algorithm, FTSA, which ensures the confidentiality of local updates. Then, a lossless model perturbation mechanism, PTSP, is proposed to protect sensitive data in global model parameters. Furthermore, the neuron pruning operation in PTSP reduces the model scale, which significantly improves computation and communication efficiency. A detailed security analysis shows that CORK can resist inference attacks on both local updates and global model parameters. In addition, CORK is implemented on the real-world MNIST and CIFAR-10 datasets, and the experimental results demonstrate that it is both effective and efficient.
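The abstract does not spell out FTSA's construction, but secure aggregation schemes of this kind typically rest on cancelling pairwise masks: each pair of clients shares a random mask, one adds it and the other subtracts it, so the server learns only the sum of the true updates. The sketch below illustrates that masking idea only; all function names and parameters are illustrative and are not CORK's actual protocol (in particular, real schemes derive the pair masks via key agreement and add secret sharing for drop tolerance).

```python
import random

def gen_pairwise_masks(client_ids, dim, modulus, rng):
    """One shared random mask vector per unordered client pair.
    (A real protocol would derive these via key agreement; this is
    a plain illustration with a shared RNG.)"""
    masks = {}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                masks[(i, j)] = [rng.randrange(modulus) for _ in range(dim)]
    return masks

def mask_update(update, my_id, client_ids, masks, modulus):
    """Client side: add the pair mask when my_id < other, subtract it
    otherwise, so every mask cancels in the server's sum."""
    out = list(update)
    for other in client_ids:
        if other == my_id:
            continue
        r = masks[(min(my_id, other), max(my_id, other))]
        sign = 1 if my_id < other else -1
        out = [(v + sign * m) % modulus for v, m in zip(out, r)]
    return out

def aggregate(masked_updates, modulus):
    """Server side: sum the masked vectors modulo `modulus`; the
    pairwise masks cancel, leaving the sum of the true updates."""
    total = [0] * len(masked_updates[0])
    for u in masked_updates:
        total = [(t + v) % modulus for t, v in zip(total, u)]
    return total
```

For example, with three clients holding updates [1, 2, 3], [4, 5, 6], and [7, 8, 9], the server's aggregate of the masked vectors is [12, 15, 18], the plain sum, while each individual masked update is uniformly random to the server.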
