Abstract

Person re-identification is an image retrieval task whose goal is to match a given target person across different cameras, and it has attracted increasing research attention. However, pose changes and occlusions frequently occur while a person is walking. In particular, most related methods do not exploit local features in a simple and effective way to handle occlusion and pose changes. Moreover, existing metric loss functions only consider the image level and cannot properly adjust the distances between local features. To tackle these problems, a novel person re-identification scheme is proposed. Through experiments, we found that different parts of a person attract attention depending on whether the person is viewed along the horizontal or the vertical direction. First, to address occlusion and pose changes, we propose a Cross Attention Module (CAM). It enables the network to generate a cross attention map and improves re-identification accuracy by enhancing the most significant local features of a person: horizontal and vertical attention vectors are extracted from the feature maps, a cross attention map is generated from them, and the key local features are enhanced by this map. Second, to address the limited expressive ability of single-level feature maps, we propose a Multi-Level Feature Complementation Module (MLFCM). In this module, information missing from high-level features is complemented by low-level features via short skip connections, and feature selection is performed among the deep feature maps. The goal is to obtain feature maps with complete information; in particular, the module recovers the contour features that are missing from high-level semantic features. Third, to address the inability of current metric loss functions to adjust the distances between local features, we propose the Part Triple Loss Function (PTLF). It reduces the within-class distance and increases the between-class distance of person parts. Experimental results show that our model achieves high Rank-k and mAP values on Market-1501, DukeMTMC-reID, and CUHK03-NP.
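The abstract only sketches the Cross Attention Module, so the following is a minimal PyTorch sketch of one plausible reading: horizontal and vertical attention vectors are obtained by pooling the feature map along its width and height, and their product forms the cross attention map that re-weights the features. The class name, the average pooling, the 1x1 convolutions, and the sigmoid gating are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CrossAttentionModule(nn.Module):
    """Sketch of a cross attention map built from horizontal and
    vertical attention vectors (assumed design, not the paper's)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs that turn pooled statistics into attention logits
        self.h_conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.v_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone
        h_vec = x.mean(dim=3, keepdim=True)        # (B, C, H, 1): pool over width
        v_vec = x.mean(dim=2, keepdim=True)        # (B, C, 1, W): pool over height
        h_att = torch.sigmoid(self.h_conv(h_vec))  # horizontal attention vector
        v_att = torch.sigmoid(self.v_conv(v_vec))  # vertical attention vector
        cross_map = h_att * v_att                  # broadcasts to (B, C, H, W)
        return x * cross_map                       # enhance key local features

# usage: enhance a hypothetical backbone feature map
feat = torch.randn(2, 256, 24, 8)
out = CrossAttentionModule(256)(feat)  # same shape as feat
```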
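Similarly, the MLFCM description (low-level features complementing high-level ones via short skip connections, plus feature selection among deep feature maps) could be realized as in the sketch below. The lateral 1x1 projection, the bilinear upsampling, and the squeeze-and-excitation style channel gate used for "feature selection" are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLFCM(nn.Module):
    """Sketch of multi-level feature complementation: a short skip
    brings low-level (contour-rich) features into the high-level map,
    then a channel gate performs feature selection (assumed design)."""

    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        # project low-level features to the high-level channel width
        self.lateral = nn.Conv2d(low_channels, high_channels, kernel_size=1)
        # channel-wise gate standing in for the "feature selection" step
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_channels, high_channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(high_channels // 4, high_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # upsample the high-level map to the low-level resolution,
        # then fuse via a short skip connection
        high_up = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                                align_corners=False)
        fused = high_up + self.lateral(low)  # complement missing details
        return fused * self.gate(fused)      # select informative channels
```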
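The abstract does not give the PTLF formula. A common way to realize a part-level triplet objective, consistent with "reducing within-class and increasing between-class distances of person parts," is to split each feature map into horizontal stripes and average a margin-based triplet loss over the parts, as sketched below. The stripe count, the margin value, and the explicit anchor/positive/negative interface (rather than in-batch hard mining) are assumptions.

```python
import torch
import torch.nn.functional as F

def part_triplet_loss(anchor, positive, negative, num_parts=6, margin=0.3):
    """Sketch of a part-level triplet loss (assumed formulation):
    each (B, C, H, W) feature map is split into horizontal stripes,
    and a triplet margin loss averaged over parts pulls same-identity
    parts together and pushes different-identity parts apart."""
    losses = []
    for maps in zip(anchor.chunk(num_parts, dim=2),
                    positive.chunk(num_parts, dim=2),
                    negative.chunk(num_parts, dim=2)):
        # global-average-pool each stripe into a part descriptor (B, C)
        a, p, n = (m.mean(dim=(2, 3)) for m in maps)
        d_ap = F.pairwise_distance(a, p)  # within-class part distance
        d_an = F.pairwise_distance(a, n)  # between-class part distance
        losses.append(F.relu(d_ap - d_an + margin).mean())
    return torch.stack(losses).mean()

# usage with random stand-in feature maps (B=4, C=256, H=24, W=8)
a, p, n = (torch.randn(4, 256, 24, 8) for _ in range(3))
loss = part_triplet_loss(a, p, n)
```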
