Abstract

Distributed or Collaborative Deep Learning has recently gained recognition for its major advantage: allowing two or more participants to contribute to training and benefit from the improved accuracy that large and varied training datasets provide. Despite this advantage, it also raises key privacy issues that must be managed. This survey presents an overview of Distributed or Collaborative Deep Learning. We first classify Collaborative or Distributed Deep Learning into Direct, Indirect, and Peer-to-Peer approaches and identify the privacy issues associated with each. We then discuss general cryptographic algorithms and other techniques that can be used for privacy preservation, together with their advantages and disadvantages in the Distributed Deep Learning setting. We also present some fundamental theory employed in this area of research, which paves the way for a comprehensive review and comparison of existing privacy-preserving approaches, most of which are based on Homomorphic Encryption. Finally, we highlight some challenges in this research domain and propose future directions. Our work reveals the following: Collaborative Deep Learning is more closely associated with the training stage of Deep Learning than with the inference stage; Homomorphic Encryption is a promising approach for preserving the privacy of training datasets in Collaborative Deep Learning and could become more widely adopted if the increased communication and computation costs associated with its use are reduced; and privacy preservation in Collaborative Deep Learning has great future prospects, with efforts needed toward more quantum-robust and collusion-resistant solutions.
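As a minimal illustration of the Homomorphic Encryption approach discussed above, the sketch below implements a toy additively homomorphic Paillier cryptosystem, under which multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The demo primes are far too small to be secure and the gradient values are hypothetical; a real Collaborative Deep Learning deployment would use large primes and a vetted cryptographic library.

```python
import math
import secrets

# Toy Paillier parameters -- INSECURE demo primes, for illustration only.
p, q = 293, 433
n = p * q                        # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function lambda(n)
g = n + 1                        # standard choice of generator
mu = pow(lam, -1, n)             # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 with a random blinding factor r."""
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:  # r must be coprime to n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# Two participants encrypt their local (toy, scalar) gradient updates;
# an aggregator multiplies the ciphertexts, which adds the plaintexts
# without ever seeing them.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))  # 42
```

Additive homomorphism of this kind is what lets an aggregating server sum participants' encrypted model updates without learning any individual update, which is the core privacy mechanism behind many of the Homomorphic Encryption-based schemes surveyed here.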
