Abstract

Most person re-identification methods are studied under idealized assumptions, yet viewpoint variations and occlusions are common in practical scenarios and lead to large intra-class variance. In this paper, we propose a multiscale global-aware channel attention (MGCA) model to address this problem. It imitates human visual perception, which tends to observe things from coarse to fine. The core of our approach is a multiscale structure with two key components: a global-aware channel attention (GCA) module that captures global structural information and an adaptive selection feature fusion (ASFF) module that highlights discriminative features. Moreover, we introduce a bidirectional guided pairwise metric triplet (BPM) loss to reduce the effect of outliers. Extensive experiments on Market-1501, DukeMTMC-reID, and MSMT17 show that our approach achieves state-of-the-art mAP; in particular, it exceeds the previous best method by 2.0% on the most challenging MSMT17 dataset.
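The abstract does not give the internals of the GCA module, but channel attention in general follows a squeeze-and-excitation pattern: pool each channel to a global descriptor, pass it through a small bottleneck, and rescale the channels with the resulting gates. The sketch below illustrates only that generic pattern with NumPy; the function name, the `reduction` ratio, and the random stand-in weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def channel_attention(features, reduction=4):
    """Generic squeeze-and-excitation-style channel attention sketch.

    `features` has shape (channels, height, width). The weights below
    are random illustrative stand-ins, not learned parameters.
    """
    c, h, w = features.shape
    # "Squeeze": global average pooling yields one descriptor per channel.
    descriptor = features.mean(axis=(1, 2))              # shape (c,)
    # "Excitation": bottleneck with a channel-reduction ratio.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ descriptor, 0.0)            # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # sigmoid gates in (0, 1)
    # Rescale each input channel by its gate value.
    return features * gate[:, None, None]

x = np.ones((8, 4, 4))
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

A multiscale variant, as the name MGCA suggests, would apply such attention to feature maps at several resolutions before fusing them; the abstract attributes that fusion to the ASFF module.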
