Abstract

Privacy-preserving machine learning (PPML) has attracted considerable attention in recent years, and several techniques have been proposed to achieve it. Cryptography-based PPML approaches such as Fully Homomorphic Encryption (FHE) and Secure Multiparty Computation (SMC) have been extensively investigated. Functional Encryption (FE), a newer paradigm, has received far less study, and FE-based PPML approaches are still in their early stages. Most existing FE-based PPML approaches focus on privacy-preserving inference, while the work on FE-based privacy-preserving training incurs very high training times. To alleviate this issue, this paper presents a privacy-preserving neural network framework using FE that supports both training and inference on encrypted data. Our approach is twofold. First, we use the Inner-Product Functional Encryption (IPFE) and Function-Hiding Inner Product Encryption (FHIPE) schemes to develop secure activation functions. To the best of our knowledge, this is the first work to demonstrate the application of FHIPE in PPML. Second, we build a PPML framework called FENet that uses these secure activation functions to perform secure forward propagation and backpropagation. Our experiments show that the framework successfully trains a neural network on the encrypted MNIST dataset with an overall accuracy of 95%. Our work outperforms the state of the art in this area, reducing training time by 28× (for IPFE) and 2× (for FHIPE) while improving security.
