Abstract

Owing to factors such as pose changes, illumination conditions, background clutter, and occlusion, person re-identification (re-ID) based on video frames is a challenging task. To exploit pixel-level saliency information and discriminative local body information in an image, and to improve re-ID accuracy under complex pose changes and viewpoint differences, this study proposes a person re-ID network based on an attention mechanism and adaptive weighting. Building on the detection of human key points, an attention mechanism is integrated to screen the discriminative information in the various parts of the human body. The network then applies adaptive weighting, assigning the extracted local features different weights according to the discriminative information of the corresponding body parts. The re-ID accuracy of the network model was verified experimentally. Results demonstrate that, by integrating the attention mechanism with adaptive region weights, the proposed model accurately extracts features from the discriminative regions of each body part and thereby improves person re-ID performance. Our method is compared with widely used person re-ID network models such as AACN and HAC. On the Market-1501 dataset, Rank-1 and mAP improve by 4.79% and 2.78% over AACN and by 8% and 3.52% over HAC, respectively; on the DukeMTMC-reID dataset, they improve by 4.92% and 3.26%, and by 5.17% and 3.17%, respectively. Compared with the earlier GLAD network model, Rank-1 and mAP on both experimental datasets increase by more than 2%. The proposed method offers a sound approach to optimizing pedestrian descriptors for person re-ID in complex environments.

Keywords: Person re-identification, Adaptive weight, Attention mechanism, Convolutional neural network
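The abstract names two mechanisms but gives no implementation: pixel-level attention within each body-part feature map, and adaptive weights that rescale each part's descriptor by how discriminative it is. The following NumPy sketch illustrates those two ideas only; the shapes, the projection vectors `w_att` and `w_gate`, and the softmax gating are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_part(feat_map, w_att):
    """Pixel-level attention over one body-part feature map.

    feat_map: (H, W, C) local feature map for one key-point-defined part
    w_att:    (C,) hypothetical learned attention projection
    Returns a C-dim descriptor pooled with per-pixel saliency weights.
    """
    flat = feat_map.reshape(-1, feat_map.shape[-1])  # (H*W, C)
    alpha = softmax(flat @ w_att)                    # per-pixel saliency
    return alpha @ flat                              # attention-weighted pooling

def fuse_parts(part_feats, w_gate):
    """Adaptive weighting: score each part descriptor, normalize the
    scores, and concatenate the reweighted descriptors."""
    scores = np.array([f @ w_gate for f in part_feats])
    weights = softmax(scores)                        # per-part weights sum to 1
    fused = np.concatenate([w * f for w, f in zip(weights, part_feats)])
    return fused, weights
```

In this sketch a part whose features score higher against `w_gate` contributes more to the final pedestrian descriptor, mirroring the abstract's claim that weights follow the discriminative information of different human parts.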
