Abstract
Person re-identification across a network of cameras with disjoint views has been studied extensively due to its importance in wide-area video surveillance. It is a challenging task for several reasons, including changes in illumination and target appearance, and variations in camera viewpoint and camera intrinsic parameters. Approaches developed to re-identify a person across different camera views need to address these challenges. More recently, neural network-based methods have been proposed to solve the person re-identification problem across different camera views, achieving state-of-the-art performance. In this paper, we present an effective and generalizable attack model that generates adversarial images of people and causes a very significant drop in the performance of existing state-of-the-art person re-identification models. The results demonstrate the extreme vulnerability of existing models to adversarial examples, and draw attention to the potential security risks this poses in video surveillance. Our proposed attack works by decreasing the dispersion of an internal feature map of a neural network, degrading the performance of several different state-of-the-art person re-identification models. We also compare our proposed attack with other state-of-the-art attack models on different person re-identification approaches, using four commonly used benchmark datasets. The experimental results show that our proposed attack outperforms the state-of-the-art attack models on the best-performing person re-identification approaches by a large margin, producing the largest drop in mean average precision values.
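To make the dispersion idea concrete, the sketch below shows one way such an attack could be implemented: an iterative, PGD-style loop that minimizes the standard deviation of an internal feature map while keeping the perturbation inside an L-infinity budget. This is an illustrative sketch rather than the authors' implementation; the `model_features` callable and the parameter values (`eps`, `alpha`, `steps`) are assumptions.

```python
import torch

def dispersion_reduction_attack(model_features, image, eps=8/255, alpha=1/255, steps=40):
    """Illustrative sketch of a dispersion-reduction attack (hypothetical interface).

    model_features: callable mapping an image tensor to an internal feature map.
    Returns a perturbed image within an L-inf ball of radius `eps` around `image`
    whose internal feature map has reduced dispersion.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Dispersion of the internal feature map, measured as its standard deviation.
        loss = model_features(adv).std()
        grad, = torch.autograd.grad(loss, adv)
        # Step *against* the gradient sign to shrink the dispersion,
        # then project back into the L-inf ball and the valid pixel range.
        adv = adv.detach() - alpha * grad.sign()
        adv = (image + (adv - image).clamp(-eps, eps)).clamp(0, 1).detach()
    return adv
```

Because the loss depends only on an internal feature map and not on identity labels, an attack of this form can transfer across re-identification models that share similar low-level representations.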
Highlights
In order to continuously track targets across multiple cameras with disjoint views, it is essential to re-identify the same target across different cameras
We demonstrate the effectiveness of an attack model in generating adversarial examples (AEs) for the person ReID application, attack multiple state-of-the-art person ReID models, and compare the performance of the presented attack approach with other state-of-the-art attack models via an extensive set of experiments on various person ReID benchmark datasets
One of our goals is to demonstrate the extreme vulnerability of multiple state-of-the-art person ReID approaches to this attack, and draw the attention of the research community to the existing security risks
Summary
In order to continuously track targets across multiple cameras with disjoint views, it is essential to re-identify the same target across different cameras. This is a very challenging task for several reasons, including changes in illumination and target appearance, and variations in camera intrinsic parameters and viewpoint. One of our goals is to demonstrate the extreme vulnerability of multiple state-of-the-art person ReID approaches to this attack, and to draw the attention of the research community to the existing security risks. Many earlier works relied on color transformation and statistical models for person re-identification. Datta et al. [45] presented the Weighted Brightness Transfer Function (WBTF), and Bhuiyan et al. [46] presented the Minimum Multiple Cumulative Brightness Transfer Function (Min-MCBTF), to model appearance variation with a learning approach. These methods assumed that multiple consecutive images are available for training, which is not the case for the commonly used benchmark datasets.
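For context on the brightness transfer functions mentioned above, the sketch below shows the classic cumulative-histogram form of a BTF between two camera views, f(b) = H_dst^{-1}(H_src(b)). It is a minimal illustration of the general idea, not the specific WBTF or Min-MCBTF formulations of [45] or [46]; the function name and interface are assumptions.

```python
import numpy as np

def brightness_transfer_function(src_pixels, dst_pixels, bins=256):
    """Lookup table mapping source-camera brightness to the destination camera.

    src_pixels, dst_pixels: 1-D arrays of intensities (0..255) of the same
    person observed in the two cameras.
    """
    # Histograms of corresponding observations in the two cameras.
    h_src, _ = np.histogram(src_pixels, bins=bins, range=(0, bins))
    h_dst, _ = np.histogram(dst_pixels, bins=bins, range=(0, bins))
    # Normalized cumulative histograms H_src and H_dst.
    c_src = np.cumsum(h_src) / max(h_src.sum(), 1)
    c_dst = np.cumsum(h_dst) / max(h_dst.sum(), 1)
    # f(b) = H_dst^{-1}(H_src(b)): for each source level, find the first
    # destination level with a matching cumulative value.
    lut = np.searchsorted(c_dst, c_src)
    return lut.clip(0, bins - 1).astype(np.uint8)

# Usage: remap a source-camera image channel into the destination camera's space.
# lut = brightness_transfer_function(src.ravel(), dst.ravel()); mapped = lut[src]
```

Sketches like this make clear why such methods need multiple corresponding observations per person: the cumulative histograms are unreliable when estimated from a single image.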