Abstract

Recent advances in deep learning have improved the state of the art in artificial intelligence, and one of the key drivers of this success is the availability of large volumes of training data. Although collaborative learning can improve accuracy by incorporating more datasets into the learning process, it also raises serious privacy concerns about the training data. In this paper, we propose a new framework for privacy-preserving multi-party deep learning in cloud computing, where the training data is distributed among many parties. Our system enables multiple parties to learn the same neural network model, which is trained on the aggregate dataset, while the privacy of each local dataset and of the learning model is protected against the cloud server. Extensive analysis shows that our schemes satisfy the security requirements of verifiability and privacy. Our implementation and experiments demonstrate that our system incurs manageable computational overhead and can be applied to a wide range of privacy-sensitive areas in deep learning.
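To make the multi-party setting concrete, the sketch below illustrates one generic way such collaboration can hide individual contributions from an untrusted server: each party blinds its local gradient with pairwise additive masks that cancel in the aggregate. This is a minimal illustration of the problem setting only, not the scheme proposed in the paper; the party count, gradient dimension, and masking protocol are assumptions for demonstration.

```python
# Minimal sketch of secure gradient aggregation with pairwise additive masks.
# Generic illustration of privacy-preserving multi-party training; NOT the
# paper's actual construction. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
NUM_PARTIES = 3
GRAD_DIM = 4

# Each party holds a local gradient computed on its private dataset.
local_grads = [rng.normal(size=GRAD_DIM) for _ in range(NUM_PARTIES)]

# Pairwise masks: party i adds mask_(i,j) and party j subtracts it, so the
# masks cancel in the sum and the server never sees any raw gradient.
masks = {}
for i in range(NUM_PARTIES):
    for j in range(i + 1, NUM_PARTIES):
        masks[(i, j)] = rng.normal(size=GRAD_DIM)

def masked_update(i: int) -> np.ndarray:
    """Return party i's gradient blinded with its pairwise masks."""
    update = local_grads[i].copy()
    for j in range(NUM_PARTIES):
        if i < j:
            update += masks[(i, j)]
        elif j < i:
            update -= masks[(j, i)]
    return update

# The untrusted server only receives masked updates and aggregates them.
aggregate = sum(masked_update(i) for i in range(NUM_PARTIES))
assert np.allclose(aggregate, sum(local_grads))  # masks cancel in the sum
print("aggregated gradient:", aggregate)
```

In this toy protocol the server learns only the summed gradient, which is the quantity needed to update the shared model; any practical system would additionally need dropout handling, verifiability, and protection of the model itself, which are the concerns the paper addresses.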
