Abstract

Distributed deep learning (DDL) naturally provides a privacy-preserving solution that enables multiple parties to jointly learn a deep model without explicitly sharing their local datasets. However, existing privacy-preserving DDL schemes still suffer from severe information leakage and/or incur a significant increase in communication cost. In this work, we design a privacy-preserving DDL framework in which all participants keep their local datasets private at low communication and computational cost, while maintaining the accuracy and efficiency of the learned model. By adopting an effective secret sharing strategy, each participant splits the intermediate parameters produced during training into shares and uploads only an aggregation result to the cloud server. We theoretically show that the local dataset of a particular participant is well protected against the honest-but-curious cloud server as well as the other participants, even in the challenging case where the cloud server colludes with some participants. Extensive experimental results validate the superiority of the proposed secret sharing based distributed deep learning (SSDDL) framework.
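The following is a minimal sketch of the additive secret sharing idea the abstract describes, not the paper's exact SSDDL protocol: each participant splits its local gradient into random shares that sum to the gradient, distributes the shares among the participants, and uploads only the sum of the shares it received, so the server can recover the aggregate gradient without seeing any individual one. The helper names (`make_shares`, `simulate_round`) and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def make_shares(gradient, n_parties, rng):
    """Split a gradient vector into n_parties additive shares that sum to the gradient."""
    shares = [rng.standard_normal(gradient.shape) for _ in range(n_parties - 1)]
    shares.append(gradient - np.sum(shares, axis=0))
    return shares

def simulate_round(local_gradients):
    """One aggregation round: parties exchange shares, upload partial sums, server adds them."""
    rng = np.random.default_rng(0)
    n = len(local_gradients)
    # share_matrix[i][j]: the share that participant i sends to participant j
    share_matrix = [make_shares(g, n, rng) for g in local_gradients]
    # Each participant j uploads only the sum of the shares it received.
    uploads = [np.sum([share_matrix[i][j] for i in range(n)], axis=0) for j in range(n)]
    # The server sees only the uploads; their sum equals the true aggregate gradient.
    return np.sum(uploads, axis=0)

if __name__ == "__main__":
    grads = [np.array([0.1, -0.2]), np.array([0.3, 0.4]), np.array([-0.5, 0.1])]
    print(simulate_round(grads))     # matches np.sum(grads, axis=0)
    print(np.sum(grads, axis=0))
```

Because each upload is masked by the random shares of the other participants, no single upload (or any strict subset of uploads) reveals a participant's local gradient, which is the intuition behind the collusion-resistance claim above.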
