Abstract
Internet-of-Things (IoT) devices are used everywhere in our daily lives. IoT applications provide many useful functions, such as preventing fires, detecting and tracking objects, monitoring and reporting changes inside and outside the environment, and capturing images and videos in homes, on roads, and in offices. For example, image data gathered by the smart sensors of autonomous vehicles can serve various applications, such as traffic monitoring, prediction of road conditions, and object classification. Image classification with deep neural networks (DNNs) on the cloud is one such machine learning task and has great market potential for IoT applications. Nevertheless, deploying these "smart" IoT devices and applications raises security risks, and relieving IoT devices of excessive computational burdens, such as data encryption, feature extraction, and image classification, remains a challenge. In this paper, we propose and implement an image classification framework with DNNs for IoT applications that is secure under indistinguishability against chosen-plaintext attack (IND-CPA). The framework performs secure image classification on the cloud without constant interaction with the IoT device. We propose and implement a real-number computation mechanism and a divide-and-conquer mechanism for the secure evaluation of linear functions in DNNs, as well as a set of unified ideal protocols for the evaluation of non-linear functions in DNNs. Information about the image contents, the private DNN model parameters, and the intermediate results is strictly concealed through the conjunctive use of a lattice-based homomorphic encryption scheme and two-party (2-PC) secure computation techniques. A pre-trained deep convolutional neural network, the Visual Geometry Group model (VGG-16), is used to extract deep features from an image. Comprehensive experimental results show that our framework is efficient and accurate.
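The idea of evaluating a DNN's linear functions on data that is hidden from the cloud can be illustrated with a toy additive-secret-sharing sketch. This is not the paper's actual lattice-based protocol; the modulus, function names, and two-party split below are illustrative assumptions only:

```python
import random

P = 2**61 - 1  # toy prime modulus (illustrative; real schemes use lattice parameters)

def share(x):
    """Split a secret integer into two additive shares mod P."""
    r = random.randrange(P)
    return r, (x - r) % P

def linear_eval(shares, weights):
    """A party evaluates the linear function w.x on its own shares;
    the sum of both parties' partial results reconstructs w.x mod P."""
    return sum(w * s for w, s in zip(weights, shares)) % P

def secure_dot(xs, weights):
    # The client splits each input into one share per party.
    a_shares, b_shares = zip(*(share(x) for x in xs))
    # Each party computes its partial dot product independently,
    # never seeing the plaintext inputs.
    ya = linear_eval(a_shares, weights)
    yb = linear_eval(b_shares, weights)
    # Reconstruction (done by the client) recovers the plaintext result.
    return (ya + yb) % P

print(secure_dot([3, 1, 4], [2, 7, 1]))  # 3*2 + 1*7 + 4*1 = 17
```

Non-linear functions such as ReLU cannot be evaluated this way on shares alone, which is why the framework pairs the homomorphic evaluation of linear layers with dedicated 2-PC protocols for the non-linear ones.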
In addition, we evaluate the security of our framework by performing the white-box membership inference attack, which is believed to be one of the most powerful attacks on DNN models. The failure of the attack indicates that our framework is practically secure.
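The intuition behind membership inference can be caricatured by the classic confidence-threshold attack, a simplified black-box variant rather than the white-box attack evaluated in the paper. The confidence values below are synthetic and purely illustrative:

```python
# Toy confidence-threshold membership inference attack: a model that
# overfits tends to assign higher confidence to its training members,
# so an attacker guesses "member" when confidence exceeds a threshold.
# These confidence values are synthetic, for illustration only.

members     = [0.99, 0.97, 0.95, 0.93, 0.90]   # model confidence on training data
non_members = [0.70, 0.65, 0.80, 0.60, 0.75]   # model confidence on unseen data

def attack_accuracy(threshold):
    hits  = sum(c >= threshold for c in members)      # true positives
    hits += sum(c < threshold for c in non_members)   # true negatives
    return hits / (len(members) + len(non_members))

# Against an overfitted model the attack separates the two sets well;
# a secure framework aims to push this accuracy toward 0.5 (random guessing).
print(attack_accuracy(0.85))  # 1.0 on this synthetic data
```

An accuracy near 0.5, as in the failed attack reported above, means the attacker cannot distinguish training members from non-members better than chance.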
Journal of Ambient Intelligence and Humanized Computing