Abstract

The human visual system can recognize a person from physical appearance even under extreme spatio-temporal variations. Surveillance systems deployed so far, however, often fail to re-identify an individual who travels through non-overlapping camera fields-of-view. Person re-identification (Re-ID) is the task of associating individuals across disjoint camera views. In this paper, we propose a robust feature extraction model named Discriminative Local Features of Overlapping Stripes (DLFOS) that associates the same individual across disjoint views of a visual surveillance system. The proposed DLFOS model accumulates discriminative features from the local patches of each overlapping stripe of the pedestrian appearance. The concatenation of the histogram of oriented gradients, the Gaussian of color, and the magnitude operator of CJLBP brings robustness to the final feature vector. The experimental results show that our proposed feature extraction model achieves a rank@1 matching rate of 47.18% on VIPeR, 64.4% on CAVIAR4REID, and 62.68% on Market1501, outperforming recently reported models from the literature and validating the advantage of the proposed model.
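The pipeline described above divides a pedestrian image into overlapping horizontal stripes, extracts local descriptors per stripe, and concatenates them into one feature vector. The sketch below illustrates only that stripe-and-concatenate structure; the stripe count, overlap ratio, and the simple color-histogram descriptor are illustrative stand-ins, not the paper's actual HOG, Gaussian-of-color, and CJLBP-magnitude descriptors.

```python
import numpy as np

def overlapping_stripes(image, n_stripes=6, overlap=0.5):
    """Yield overlapping horizontal stripes of an (H, W, C) image.

    n_stripes and overlap are illustrative values, not taken from the paper.
    """
    h = image.shape[0]
    # Stripe height such that n_stripes stripes with the given overlap span the image.
    stripe_h = int(h / (n_stripes - (n_stripes - 1) * overlap))
    step = max(int(stripe_h * (1 - overlap)), 1)
    for top in range(0, h - stripe_h + 1, step):
        yield image[top:top + stripe_h]

def stripe_descriptor(stripe, bins=8):
    """Stand-in local descriptor: a normalized per-channel color histogram.

    The paper instead concatenates HOG, the Gaussian of color, and the
    CJLBP magnitude operator for each stripe.
    """
    feats = []
    for c in range(stripe.shape[2]):
        hist, _ = np.histogram(stripe[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

def dlfos_like_feature(image):
    """Concatenate the descriptors of all overlapping stripes into one vector."""
    return np.concatenate([stripe_descriptor(s) for s in overlapping_stripes(image)])

# Example on a random 128x48 RGB pedestrian crop:
img = np.random.randint(0, 256, size=(128, 48, 3), dtype=np.uint8)
vec = dlfos_like_feature(img)  # 6 stripes x 3 channels x 8 bins = 144 dimensions
```

A vector of this form would then be compared across camera views with a learned metric such as XQDA, as the highlights below describe.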

Highlights

  • Surveillance cameras are mounted in critical geographical locations to ensure public safety and security

  • The proposed Discriminative Local Features of Overlapping Stripes (DLFOS) model, in combination with XQDA and SRID, is evaluated on the viewpoint invariant pedestrian recognition (VIPeR), CAVIAR4REID, and Market1501 databases, and the results are summarized as CMC curves and tables

  • The proposed DLFOS model is evaluated on the VIPeR dataset with numerous combinations of metric learning methods, and the results are shown in Figure 3 and Table 1


Summary

Introduction

Surveillance cameras are mounted in critical geographical locations to ensure public safety and security. The viewpoint, object scale, and illumination of the scene vary with the camera mount position, depth, and light sources of the surveillance environment [6,7]. Scale variation arising from continuous change in camera depth causes the difference in resolution between the probe and gallery samples of the same candidate [10]. Biometric features are highly sensitive to the view angle, scale, illumination, and orientation of the person, so the overall appearance is used instead. Appearance variability from many sources, such as scale, viewpoint, pose, illumination, and occlusions in the non-overlapping fields-of-view of the camera network, degrades the performance of an automatic Re-ID system [19]. Researchers have so far developed techniques that are robust to geometric and photometric variations in a person's appearance. We propose a robust feature extraction technique that provides better rank 1 matching accuracy on several common Re-ID datasets in both single-shot and multi-shot settings.

Related Work
Proposed Re-ID Model
Dataset
CAVIAR4REID
Market1501
Conclusions

