Abstract

Machine learning has become a ubiquitous technology and is used in many aspects of our daily lives. However, because machine learning operates on huge amounts of data, serious concerns about the privacy of users' data have arisen. These privacy issues are exacerbated when machine learning is performed in the cloud, i.e., in machine learning as a service (MLaaS) settings. In recent years, the problem of privacy-preserving machine learning has been studied extensively, and different approaches have been proposed. Cryptography provides one possible solution, and several techniques based on homomorphic encryption, secure multiparty computation, and the newly emerging functional encryption have been developed to address privacy issues in machine learning. In this paper, we focus on privacy-preserving deep neural networks based on functional encryption. Most existing work based on functional encryption suffers from high computational complexity and security issues. This paper proposes a novel methodology for computing the activation functions of a neural network in a secure and privacy-preserving manner using function-hiding inner product encryption (FHIPE). To our knowledge, this is the first work to apply function-hiding inner product functional encryption to privacy-preserving machine learning. We also conduct experiments to speed up the inner-product computations underlying functional encryption. Our experimental results show a 95x speedup for the secure activation function based on inner-product functional encryption and a 10x speedup for the FHIPE-based secure activation function compared with earlier work in this field.
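As background for the approach summarized above, the sketch below illustrates the core primitive the paper builds on: inner-product functional encryption, in which a secret key derived for a vector y lets the decryptor learn only the inner product ⟨x, y⟩ from an encryption of x (for a neural network, x would be a quantized input and y a neuron's weights). This is a minimal toy in the spirit of the well-known DDH-based construction of Abdalla et al., not the function-hiding scheme (FHIPE) used in the paper, which additionally hides y and relies on bilinear pairings; the parameters and helper names are illustrative assumptions and offer no real security.

```python
# Toy inner-product functional encryption (IPFE), for illustration only.
# A key sk_y for vector y reveals <x, y> from an encryption of x and nothing more.
# NOT the function-hiding (FHIPE) scheme from the paper; insecure toy parameters.
import secrets

P = 2**31 - 1       # small prime modulus (toy parameter, insecure)
G = 7               # primitive root modulo P, so G generates Z_P^*
ORDER = P - 1       # order of the group generated by G


def setup(n):
    """Generate a master secret/public key pair for length-n vectors."""
    msk = [secrets.randbelow(ORDER) for _ in range(n)]
    mpk = [pow(G, s, P) for s in msk]
    return mpk, msk


def encrypt(mpk, x):
    """Encrypt an integer vector x component-wise 'in the exponent'."""
    r = secrets.randbelow(ORDER)
    ct0 = pow(G, r, P)
    cts = [pow(h, r, P) * pow(G, xi, P) % P for h, xi in zip(mpk, x)]
    return ct0, cts


def keygen(msk, y):
    """Derive a functional key for y: sk_y = <msk, y> mod ORDER."""
    return sum(s * yi for s, yi in zip(msk, y)) % ORDER


def decrypt(sk_y, y, ct, bound):
    """Recover <x, y>, assumed to lie in [0, bound], and nothing else about x."""
    ct0, cts = ct
    num = 1
    for c, yi in zip(cts, y):
        num = num * pow(c, yi, P) % P
    # num / ct0^sk_y = G^{<x, y>}
    target = num * pow(pow(ct0, sk_y, P), -1, P) % P
    acc = 1
    for v in range(bound + 1):          # brute-force discrete log over a small range
        if acc == target:
            return v
        acc = acc * G % P
    raise ValueError("inner product outside the expected range")


if __name__ == "__main__":
    x = [3, 1, 4, 1]                    # e.g. a quantized input to a neuron
    y = [2, 7, 1, 8]                    # e.g. the neuron's quantized weights
    mpk, msk = setup(len(x))
    ct = encrypt(mpk, x)
    sk_y = keygen(msk, y)
    print(decrypt(sk_y, y, ct, bound=200))   # -> 25, i.e. <x, y>
```

In the function-hiding variant the key holder additionally learns nothing about y beyond the inner product, which is what allows both the model weights and the inputs to stay hidden while the pre-activation values needed by the secure activation functions are computed.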
