Abstract

Deep neural networks have outperformed traditional machine learning approaches on many tasks and are the tool of choice in many fields. However, directly applying these techniques in fields that deal with private data is challenging, because a third party, which organizations may not trust, usually needs to centrally collect the private data. To overcome this challenge, researchers have proposed distributed training algorithms that allow multiple users to collaboratively train their local deep learning models without sharing private datasets. However, these approaches are vulnerable to recently proposed attacks in which a malicious user can replicate private data from another user by compromising the collaborative training algorithm. In this paper, we propose a privacy-preserving distributed deep learning algorithm that allows a user to leverage the private datasets of a group of users while protecting the privacy of its own data. Our algorithm prohibits this user from ever sharing the parameters of its model, and thus prevents malicious users from compromising the training and replicating the user's private data. We conduct extensive experiments and observe that our algorithm achieves a model accuracy of 95.18%, matching the accuracy of previous approaches that are vulnerable to these attacks.
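The abstract does not spell out the algorithm, but one common way to collaborate without ever exchanging model parameters is to exchange only predictions (soft labels) on public unlabeled data and distill them locally. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual method: three simulated peers hold private linear models, and the learner trains its own model only on their aggregated predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 peers each hold a private linear classifier.
# The learner never sees peer parameters, only their predictions
# (soft labels) on a public unlabeled dataset.
peer_weights = [rng.normal(size=(4, 2)) for _ in range(3)]

def softmax_predict(w, x):
    """Softmax predictions of a linear model; parameters stay local."""
    logits = x @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Public unlabeled inputs visible to everyone.
x_pub = rng.normal(size=(64, 4))

# Aggregate peer predictions into consensus soft labels.
soft_labels = np.mean([softmax_predict(w, x_pub) for w in peer_weights],
                      axis=0)

# The learner distills the consensus into its own model via gradient
# descent on cross-entropy; no model parameters are ever exchanged.
w_local = np.zeros((4, 2))
for _ in range(200):
    p = softmax_predict(w_local, x_pub)
    grad = x_pub.T @ (p - soft_labels) / len(x_pub)
    w_local -= 0.5 * grad

local_pred = softmax_predict(w_local, x_pub)
agreement = np.mean(local_pred.argmax(1) == soft_labels.argmax(1))
```

Because only predictions cross the trust boundary, an attacker who participates in training never observes the learner's parameters or gradients, which is the property the abstract claims for the proposed algorithm.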
