Abstract

Deep learning inference, which makes trained deep learning models available to users, is typically deployed as a cloud-based service for resource-constrained clients. However, existing cloud-based frameworks either suffer from severe information leakage or incur a significant increase in communication cost. In this work, we address the problem of privacy-preserving deep learning inference such that both the input data and the model parameters are protected, with low communication and computational costs. In addition, the user can verify the correctness of the returned results with small overhead, which is essential for critical applications. Specifically, by designing secure sub-protocols, we introduce a new layer that collaboratively performs the secure computations involved in the inference. In combination with secret sharing, we inject verifiable data into the input, which enables the client to check the correctness of the returned inference results. Theoretical analyses and extensive experimental results on the MNIST and CIFAR10 datasets validate the superiority of our proposed privacy-preserving and verifiable deep learning inference (PVDLI) framework.
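To make the two core ingredients concrete, the sketch below illustrates (under simplifying assumptions, not as the paper's actual PVDLI protocol) how additive secret sharing can hide a client's input from two non-colluding servers while a known "canary" input injected into the batch lets the client check the returned result. The toy linear layer, field modulus, and all names are hypothetical placeholders.

```python
# Illustrative sketch only: additive secret sharing plus a known-input check.
import secrets

P = 2**61 - 1  # prime modulus for the sharing field (illustrative choice)

def share(x: int) -> tuple[int, int]:
    """Split x into additive shares so that x = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    return s0, (x - s0) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

def server_linear_layer(shares: list[int], weights: list[int]) -> int:
    """Each server evaluates a public linear layer on its own shares;
    linearity lets the client combine the two partial results."""
    return sum(w * s for w, s in zip(weights, shares)) % P

weights = [3, 1, 4]       # toy public model weights
real_input = [7, 42, 13]  # client's private input
canary = [1, 0, 0]        # verifiable input with a precomputed expected output

def infer(x: list[int]) -> int:
    s0, s1 = zip(*(share(v) for v in x))
    y0 = server_linear_layer(list(s0), weights)  # server 0's partial result
    y1 = server_linear_layer(list(s1), weights)  # server 1's partial result
    return reconstruct(y0, y1)

assert infer(canary) == 3  # verification: canary output matches the expected value
print("inference on private input:", infer(real_input))
```

In this toy setting, neither server sees the plaintext input because each holds only a uniformly random share; the canary check catches a server that returns an incorrect or tampered result, at the cost of one extra query.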
