Abstract

Nowadays, our lives benefit from various vision-based applications, such as video surveillance, human identification, and aided driving. Unauthorized access to vision-related data greatly threatens users' privacy, and many encryption schemes have been proposed to secure images and videos in these conventional scenarios. The neuromorphic vision sensor (NVS) is a new class of bio-inspired sensor that generates a stream of impulse-like events rather than synchronized image frames, which reduces sensor latency and broadens applications in surveillance and identification. However, the privacy issues related to NVS remain a significant challenge: image reconstruction and human identification approaches, for example, may expose privacy-related information from NVS events. This work is the first to investigate the privacy of NVS. We first analyze possible security attacks on NVS, including grayscale image reconstruction and privacy-related classification. We then propose a dedicated encryption framework for NVS that incorporates a 2D chaotic mapping to scramble the positions of events and flip their polarities. In addition, an updating score controls the frequency of execution, which supports efficient encryption on different platforms. Finally, extensive experiments demonstrate that the proposed framework effectively protects NVS events against grayscale image reconstruction and human identification while achieving high efficiency on various platforms, including resource-constrained devices.
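To illustrate the kind of event encryption the abstract describes, the following is a minimal sketch, not the authors' implementation: the paper does not specify which 2D chaotic map it uses, so Arnold's cat map (for coordinate scrambling) and a logistic map (for polarity flipping), along with all function names and the key value, are assumptions made here for illustration only.

```python
import numpy as np

def arnold_cat_scramble(xs, ys, width, height, iterations=1):
    """Scramble event (x, y) coordinates with Arnold's cat map.

    Illustrative stand-in for the paper's unspecified 2D chaotic mapping;
    the cat map requires a square sensor grid.
    """
    assert width == height, "Arnold's cat map is defined on a square grid"
    x, y = xs.copy(), ys.copy()
    for _ in range(iterations):
        # One iteration of the cat map: (x, y) -> (x + y, x + 2y) mod N
        x, y = (x + y) % width, (x + 2 * y) % height
    return x, y

def logistic_polarity_flip(ps, key=0.3141, r=3.99):
    """Flip event polarities wherever a logistic-map sequence exceeds 0.5.

    `key` plays the role of a secret seed; its value here is arbitrary.
    """
    seq = np.empty(len(ps))
    state = key
    for i in range(len(ps)):
        state = r * state * (1 - state)  # logistic map x_{n+1} = r x_n (1 - x_n)
        seq[i] = state
    return np.where(seq > 0.5, -ps, ps)

# Example: encrypt a small batch of events, each an (x, y, polarity) tuple
xs = np.array([10, 20, 30])
ys = np.array([5, 15, 25])
ps = np.array([1, -1, 1])
ex, ey = arnold_cat_scramble(xs, ys, 128, 128, iterations=3)
ep = logistic_polarity_flip(ps)
```

Because the cat map is a bijection on the grid and the logistic sequence is reproduced exactly from the key, a receiver holding the key can invert both steps to recover the original event stream, which is the property an NVS encryption scheme of this shape would rely on.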

Highlights

  • Nowadays, vision sensors have been widely deployed and playing important roles in a variety of computer-vision-based applications

  • Image reconstruction approaches can generate visible images from the neuromorphic vision sensor (NVS) output, so all security issues related to traditional images and videos remain applicable to NVS applications and are worth revisiting and investigating

  • The accuracy was employed as the metric to quantitatively validate how the encryption algorithm prevented the deep learning-based identification attacks, including an event frame-based convolutional neural network (CNN) and a sparse event-based graph convolutional network (GCN)

Summary

Introduction

Vision sensors have been widely deployed and play important roles in a variety of computer-vision-based applications. Traditional RGB cameras have been applied in various scenarios, such as video surveillance, face recognition, and aided driving. Prior work assumed that NVS is secure and privacy-preserving because it does not produce visible images, and directly applied NVS in privacy-related scenarios. However, the emergence of grayscale image reconstruction approaches [12,13,14] seriously threatens the security of NVS. These approaches can generate visible images from the NVS output, so all security issues related to traditional images and videos remain applicable to NVS applications and are worth revisiting and investigating.

