Abstract

Deep learning and its variants have surpassed classical machine learning algorithms, achieving remarkable results across a broad range of applications. However, deploying deep learning models in the cloud introduces privacy and security issues for data owners and model owners, along with challenges such as computational inefficiency, ciphertext expansion, error accumulation, trade-offs between security and usability, and attacks on deep learning models. Homomorphic encryption allows computations to be performed on encrypted data without disclosing its content. This research examines the basic concepts of homomorphic encryption, together with its limitations, benefits, weaknesses, possible applications, and development tools, concentrating on neural networks. Additionally, we review systems that integrate neural networks with homomorphic encryption to preserve privacy. Furthermore, we classify the modifications made to neural network models and architectures that render them computable under homomorphic encryption, and the effect of these changes on performance. This paper presents a thorough review of privacy-preserving homomorphic cryptosystems for neural network models; it identifies existing solutions, analyzes their potential weaknesses, and makes recommendations for further research.
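To make the core idea of computing on encrypted data concrete, the following is a minimal sketch using the Paillier cryptosystem via the python-paillier (`phe`) library. Both the library and the scheme are illustrative assumptions rather than tools evaluated in this review; Paillier supports only ciphertext additions and plaintext-scalar multiplications, not the arbitrary circuits needed for full neural network inference.

```python
# Minimal sketch of homomorphic computation using the Paillier scheme
# (additively homomorphic) via python-paillier: `pip install phe`.
from phe import paillier

# Key generation: the data owner keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts its inputs before sending them to the cloud.
enc_a = public_key.encrypt(3)
enc_b = public_key.encrypt(4)

# The cloud computes directly on ciphertexts without ever seeing the
# plaintexts: Paillier supports ciphertext addition and multiplication
# by a plaintext scalar.
enc_result = enc_a + 2 * enc_b  # corresponds to 3 + 2 * 4 = 11

# Only the private-key holder can recover the result.
assert private_key.decrypt(enc_result) == 11
```

Fully homomorphic schemes such as BFV or CKKS extend this idea to the ciphertext multiplications and approximated activation functions that neural network inference requires, at the cost of the ciphertext expansion and error accumulation noted above.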
