Abstract

Organizations often wish to collaboratively train deep learning models over their combined datasets for a common benefit (e.g., a better-trained model, or the ability to learn a more complex model). However, owing to concerns about privacy leakage, organizations cannot share their data directly, especially in sensitive domains. In this paper, a privacy-preserving collaborative deep learning mechanism, named Sigma, is designed to allow participating organizations to train a collective model without exposing their local training data to the others. Specifically, a single-server-aided private collaborative architecture is introduced to achieve private collaborative learning, which protects organizations' data even if $n-1$ out of $n$ participants collude. We also design a practical protocol for secure model training that resists the typical inference attack mounted through the shared information. We then propose a fair model-releasing mechanism for participants and introduce differential privacy to prevent model stealing and membership inference attacks. Furthermore, we prove that Sigma preserves participants' privacy and analyze its communication overhead in theory. To evaluate the effectiveness and efficiency of Sigma, we conduct experiments over two real-world datasets; the simulation results demonstrate that Sigma efficiently achieves collaborative model training and effectively resists the membership inference attack.
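The abstract states that differential privacy is applied to the released model information to resist model stealing and membership inference. As a minimal sketch of one common way this is done (clipping each shared update to a bounded L2 norm and adding calibrated Gaussian noise), assuming illustrative parameter names rather than Sigma's actual protocol:

```python
import numpy as np

def dp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a model update to L2 norm <= clip_norm, then add Gaussian noise.

    Hypothetical illustration of the Gaussian mechanism often used to
    protect shared updates; `clip_norm` and `noise_multiplier` are
    assumed names, not parameters from the paper.
    """
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    # Clipping bounds each participant's influence on the shared model.
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)
    # Noise standard deviation scales with the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise

# Example: sanitize a toy gradient before sharing it with the server.
g = np.array([3.0, 4.0])  # L2 norm = 5.0, so clipping rescales it to norm 1.0
sanitized = dp_sanitize_update(g, clip_norm=1.0, rng=np.random.default_rng(0))
```

The noisy update, rather than the raw gradient, is what a participant would send to the aggregation server, which limits how much any single training example can be inferred from the shared information.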

