Abstract

Cross-resolution person re-identification, in which query and gallery images differ in resolution, is a challenging variant of the person re-identification problem. To address this resolution mismatch, many studies introduce super-resolution into the re-identification pipeline. In this work, we propose a cross-resolution person re-identification method built on a double transformer residual super-resolution network (DTRSR), which consists of a super-resolution module and a person re-identification module. In the super-resolution module, we propose a double transformer network as the attention module: the features extracted by the residual backbone are first divided into local parts, and the similarity between each local feature and the global features obtained by average pooling and by maximum pooling is then computed, allowing the module to quickly capture the latent spatial weights. In the re-identification module, we propose an effective fusion method based on key point features (KPFF). The key point extraction model not only resolves the misalignment of local features but also suppresses interference from background noise. To fully exploit the relations among key point features, we compute the two-way correlation between each key point feature and the other features and superimpose this correlated context onto the feature itself, yielding a fused representation that carries both global and local information. Extensive experiments demonstrate the effectiveness of the method: compared with state-of-the-art methods on three datasets, it improves rank-1 by 1.1%, 3.5%, and 1.7%; rank-5 by 1.3%, 1.7%, and 0.3%; and rank-10 by 0.1%, 0.4%, and 0.1%, respectively.
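
The following is a minimal PyTorch-style sketch of the two mechanisms described above, the pooled local-to-global similarity attention and the bidirectional key point feature fusion. All names, tensor shapes, and the exact similarity and fusion forms are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def pooled_similarity_attention(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention from local-vs-global similarity (assumed form).

    feat: (B, C, H, W) feature map from the residual backbone.
    Returns the feature map reweighted per spatial location.
    """
    B, C, H, W = feat.shape
    local = feat.flatten(2).transpose(1, 2)             # (B, H*W, C) local descriptors
    g_avg = F.adaptive_avg_pool2d(feat, 1).flatten(1)   # (B, C) average-pooled global feature
    g_max = F.adaptive_max_pool2d(feat, 1).flatten(1)   # (B, C) max-pooled global feature
    # Similarity of every spatial location to each global descriptor.
    sim_avg = F.cosine_similarity(local, g_avg.unsqueeze(1), dim=-1)  # (B, H*W)
    sim_max = F.cosine_similarity(local, g_max.unsqueeze(1), dim=-1)  # (B, H*W)
    weights = torch.sigmoid(sim_avg + sim_max).view(B, 1, H, W)       # spatial weights
    return feat * weights


def kpff_fuse(kp_feats: torch.Tensor) -> torch.Tensor:
    """Bidirectional correlation fusion over key point features (assumed form).

    kp_feats: (B, K, C) descriptors for K detected body key points.
    """
    x = F.normalize(kp_feats, dim=-1)
    sim = x @ x.transpose(1, 2)                             # (B, K, K) pairwise correlation
    attn_fwd = torch.softmax(sim, dim=-1)                   # how much key point i attends to j
    attn_bwd = torch.softmax(sim, dim=-2).transpose(1, 2)   # how much j attends to i
    ctx = attn_fwd @ kp_feats + attn_bwd @ kp_feats         # two-way aggregated context
    # Superimpose the correlated context onto each local key point feature.
    return kp_feats + ctx
```

Under these assumptions, the attention step reweights the residual features before upsampling, and the fusion step gives each key point descriptor access to global context before matching.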
