Abstract

The growing deployment of deep learning models that process individuals' sensitive healthcare data on personal devices introduces challenging privacy and security problems when computation is performed on an untrusted server. Homomorphic encryption (HE) is a suitable cryptographic technique for secure machine learning because it computes directly over encrypted data, allowing the data owner and model owner to outsource the processing of sensitive information to an untrusted server without leaking any information about the data. However, most current HE schemes support only limited arithmetic operations, which significantly hinders their use in secure deep learning algorithms, especially for the nonlinear activation functions of deep neural networks. In this paper, we develop a novel HE-friendly deep neural network, named REsidue ACTivation HE (ReActHE), that implements a precise and privacy-preserving algorithm with a non-approximating HE scheme for the activation function. We adopt a residue activation strategy with a scaled power activation function for HE-friendly nonlinear activation in deep neural networks. Moreover, we propose a residue activation network structure that constrains the latent space during training to alleviate the optimization difficulty. We comprehensively evaluate the proposed ReActHE method on various biomedical datasets and widely used image datasets. Our results demonstrate that ReActHE outperforms alternative solutions for secure machine learning with HE and achieves low approximation errors in classification and regression tasks.
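To illustrate why a scaled power activation is HE-friendly, the sketch below contrasts it with ReLU: a low-degree polynomial needs only multiplications and additions, which leveled HE schemes (e.g., CKKS) support natively, whereas the comparison inside ReLU does not. The specific exponent and scale shown here are illustrative assumptions, not the parameterization from the paper itself.

```python
import numpy as np

def scaled_power_activation(x, power=2, scale=0.1):
    """HE-friendly polynomial activation: only multiply/add operations.

    `power` and `scale` are hypothetical values for illustration; the
    abstract does not specify the exact parameterization used in ReActHE.
    """
    return scale * np.power(x, power)

def relu(x):
    """Standard ReLU; the max comparison cannot be evaluated
    directly on ciphertexts under most HE schemes."""
    return np.maximum(x, 0.0)

x = np.linspace(-2.0, 2.0, 5)
print(scaled_power_activation(x))  # computable over encrypted inputs
print(relu(x))                     # requires plaintext comparison
```

The key design point is that every operation in `scaled_power_activation` is a ring operation, so an encrypted tensor can pass through the activation without decryption or polynomial approximation of a non-arithmetic function.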
