Abstract

Convolutional neural networks (CNNs) are widely used in image and video recognition tasks because of their superior performance. With the growing popularity of machine learning as a service, CNN model owners often outsource their models to the cloud to provide inference services. However, both the user data and the outsourced models processed by cloud-based CNN inference applications are vulnerable to privacy leakage. Several studies have therefore proposed privacy-preserving CNN architectures, but most rely on heavyweight cryptographic primitives that demand significant computational resources. In this study, we propose a novel scheme, a Fast Privacy-Preserving Outsourced Convolutional Neural Network with Low Bandwidth (FPCNN), which provides secure cloud inference over outsourced data and models. Specifically, for linear CNN layers, FPCNN significantly reduces the inter-server bandwidth by applying scaled noise models and scaled secret sharing. For nonlinear CNN layers, we first optimize the multiplication operation under secret sharing to save bandwidth. Then, exploiting the input characteristics of CNN nonlinearities, we introduce a secure parallel rectified linear unit (ReLU) computation protocol that substantially reduces the number of communication rounds. Finally, the experimental results demonstrate that FPCNN achieves a 2.66× speedup and a 32.8× reduction in bandwidth compared with state-of-the-art methods.
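To make the secret-sharing building block concrete, the sketch below shows a textbook two-party additive secret sharing scheme with Beaver-triple multiplication. This is a generic illustration of the primitive the abstract refers to, not FPCNN's optimized variant: the modulus `P`, the `share`/`secure_mul` helpers, and the in-process "dealer" that generates the triple are all illustrative assumptions.

```python
import random

P = 2**61 - 1  # public modulus for the share ring (illustrative choice)

def share(x):
    """Split secret x into two additive shares: x = s0 + s1 (mod P)."""
    s0 = random.randrange(P)
    return s0, (x - s0) % P

def reconstruct(s0, s1):
    """Recombine two additive shares into the secret."""
    return (s0 + s1) % P

def secure_mul(x_shares, y_shares):
    """Beaver-triple multiplication of two secret-shared values.

    A dealer supplies shares of a random triple (a, b, c) with c = a*b.
    The two servers open only the masked values e = x - a and f = y - b,
    which are uniformly random and thus leak nothing about x or y.
    """
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    (a0, a1), (b0, b1), (c0, c1) = share(a), share(b), share(c)
    x0, x1 = x_shares
    y0, y1 = y_shares
    # Each server computes its local share of e and f; the values are then opened.
    e = reconstruct((x0 - a0) % P, (x1 - a1) % P)
    f = reconstruct((y0 - b0) % P, (y1 - b1) % P)
    # Local shares of the product z = x*y; the public e*f term is added by one server only.
    z0 = (f * a0 + e * b0 + c0) % P
    z1 = (f * a1 + e * b1 + c1 + e * f) % P
    return z0, z1

# One secure multiplication: the servers never see 6 or 7 in the clear.
assert reconstruct(*secure_mul(share(6), share(7))) == 42
```

Each secure multiplication costs one round of communication to open `e` and `f`; FPCNN's contribution for nonlinear layers, as described above, lies in reducing exactly this per-operation bandwidth and round count.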
