Abstract

Feature extraction for visible–infrared person re-identification (VI-ReID) is challenging because of the cross-modality discrepancy between images taken by cameras operating in different spectra. Most existing VI-ReID methods ignore the potential relationships between features. In this paper, we transform low-order person features into high-order graph features to fully exploit the hidden information among person features. To this end, we propose a multi-hop attention graph convolution network (MAGC) that extracts robust joint person features with a residual attention mechanism while reducing the impact of environmental noise. Propagating higher-order graph features within MAGC enables the network to learn the hidden relationships between features. We also introduce a self-attention semantic perception layer (SSPL), which adaptively selects more discriminative features to further promote the transmission of useful information. Experiments on VI-ReID datasets demonstrate the effectiveness of the proposed method.
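The abstract does not specify the exact layer formulation, so the following is only a minimal NumPy sketch of the general multi-hop attention graph-convolution idea: node features are propagated over several hops of a normalized adjacency matrix, each hop's contribution is weighted by an attention coefficient, and a residual term is added. All names (`multi_hop_attention_gc`, `hop_scores`, `K`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_hop_attention_gc(X, A, W, hop_scores, K=3):
    """Sketch of one multi-hop attention graph-convolution layer.

    X          : (N, d)  node (person) features
    A          : (N, N)  adjacency matrix of the feature graph
    W          : (d, d)  shared linear projection (square, so the
                         residual connection type-checks)
    hop_scores : (K,)    learnable per-hop attention logits (assumed)
    """
    # Add self-loops and row-normalize: standard GCN preprocessing.
    A_hat = A + np.eye(A.shape[0])
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)

    alpha = softmax(hop_scores)          # attention weights over hops
    H = X
    out = np.zeros_like(X @ W)
    for k in range(K):
        H = A_hat @ H                    # propagate one more hop
        out += alpha[k] * (H @ W)        # attention-weighted aggregation
    return out + X @ W                   # residual connection
```

Weighting hops with a softmax lets the layer learn how far information should travel on the graph, while the residual term preserves the original low-order features alongside the higher-order ones.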
