Abstract

Quantum machine learning aims to exploit the potential advantages of quantum computing to improve the efficiency of machine learning. However, when delegating the training or inference of quantum neural networks (QNNs) to quantum cloud servers, ordinary users face the risk of leaking both their input data and their models. To address this problem, we present a new framework in which the training and inference of delegated QNNs are performed on encrypted data, protecting the privacy of users’ data and models. The framework consists of two alternately trained models: an encryptor and a predictor. The classical client first trains the encryptor, defined by a classical neural network, to map plaintext input data to vastly different ciphertext data. The ciphertext data is sent to the quantum cloud server to train the predictor, defined by a QNN, which learns to predict the labels of the plaintext data indirectly. With the trained encryptor and predictor, the client can send encrypted data to the server for prediction and obtain nearly equivalent results. We apply the proposed framework to three types of QNN models, handling low-dimensional tabular data, image data, and one-dimensional time-series data, respectively. Experimental results show that the privacy-protection method based on our framework preserves data and model privacy without degrading QNN performance. The framework does not require users to have quantum capabilities and is suitable for protecting data and model privacy across a variety of QNN models.
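To make the alternating training scheme concrete, the following is a minimal sketch of how a client-side encryptor and a server-side predictor could be trained in turn. It assumes a PyTorch-style setup; the `Encryptor` and `Predictor` classes, the `alternate_train` helper, and the use of a classical linear head as a stand-in for the variational QNN are all illustrative assumptions, not the paper's actual architecture or protocol.

```python
import torch
import torch.nn as nn

class Encryptor(nn.Module):
    """Hypothetical classical encryptor: maps plaintext feature vectors to
    ciphertext vectors of the same dimension. Architecture is illustrative."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 2 * dim), nn.Tanh(), nn.Linear(2 * dim, dim)
        )

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Stand-in for the server-side QNN. In practice this would be a
    parameterized quantum circuit evaluated on a quantum cloud backend,
    not a classical linear layer."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.head = nn.Linear(dim, n_classes)

    def forward(self, z):
        return self.head(z)

def alternate_train(encryptor, predictor, loader, epochs=10):
    """Alternate between a server-side predictor update on ciphertext only
    and a client-side encryptor update through the (temporarily fixed)
    predictor. The schedule shown is an assumption for illustration."""
    loss_fn = nn.CrossEntropyLoss()
    opt_enc = torch.optim.Adam(encryptor.parameters(), lr=1e-3)
    opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            # Step 1: the server sees only ciphertext; detach() blocks
            # gradients from flowing back into the encryptor.
            z = encryptor(x).detach()
            loss_pred = loss_fn(predictor(z), y)
            opt_pred.zero_grad()
            loss_pred.backward()
            opt_pred.step()
            # Step 2: the client updates the encryptor through the fixed
            # predictor (only opt_enc steps, so predictor weights are held).
            loss_enc = loss_fn(predictor(encryptor(x)), y)
            opt_enc.zero_grad()
            loss_enc.backward()
            opt_enc.step()
```

At inference time, the client would encrypt a sample with the trained encryptor and send only the ciphertext to the server, e.g. `predictor(encryptor(x)).argmax(dim=1)`, so the plaintext never leaves the client.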
