Abstract

Visible–infrared (VI) person re-identification (Re-ID) is a critical identification task that involves retrieving and matching images of an individual across the infrared and visible imaging modalities. To improve performance, researchers have developed methods that exploit implicit feature information; however, these methods degrade when few discriminative features are available. To address this issue, we propose a weighted fused cross-attention multi-scale residual vision transformer (WF-CAMReViT) approach that re-identifies the appropriate person from visible–infrared modality images by integrating the cross-attention multi-scale residual vision transformer architecture with Opposition-based Dove Swarm Optimization (ODSO). The proposed framework aims to bridge the domain gap between the visible and infrared modalities and significantly improve re-identification performance. RGB (visible) and infrared (IR) images of persons are gathered from standard datasets and passed through a cross-attention multi-scale residual vision transformer network to extract features, which are then fused using a minimal weight. We also propose Opposition-based DSO to find this minimal weight. The weighted fused features are then passed to the final decoder layer of CAMReViT to perceive the characteristics of each modality. In this study, a model-aware enhancement (MAE) loss is developed to improve the modality information capacity of modality-shared features. Finally, experimental results on the SYSU-MM01 and RegDB datasets are compared with state-of-the-art transformer-based methods on the visible–infrared person Re-ID task to verify the efficacy of the proposed model.
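The weighted-fusion step can be illustrated with a minimal sketch. Assuming PyTorch, and assuming that `visible_feat` and `infrared_feat` stand in for the CAMReViT encoder outputs of the two modalities, the toy search below shows how an opposition-based swarm could select a scalar fusion weight. The function names (`opposition`, `fuse`, `odso_weight_search`), the drift update, and the scoring callback are hypothetical illustrations of the general idea, not the authors' ODSO implementation, which the abstract does not specify.

```python
import torch

def opposition(w: torch.Tensor, lo: float = 0.0, hi: float = 1.0) -> torch.Tensor:
    """Opposite point of each candidate weight within [lo, hi]."""
    return lo + hi - w

def fuse(visible_feat: torch.Tensor, infrared_feat: torch.Tensor,
         w: torch.Tensor) -> torch.Tensor:
    """Convex combination of the two modality feature tensors."""
    return w * visible_feat + (1.0 - w) * infrared_feat

def odso_weight_search(visible_feat, infrared_feat, score_fn,
                       n_candidates=20, n_iters=50):
    """Toy opposition-based swarm search for a scalar fusion weight.

    score_fn maps a fused feature batch to a scalar loss to minimise,
    e.g. a Re-ID loss evaluated on a held-out validation batch.
    """
    # Opposition-based initialisation: each random candidate is paired
    # with its opposite point so the swarm covers [0, 1] symmetrically.
    weights = torch.rand(n_candidates)
    weights = torch.cat([weights, opposition(weights)])
    best_w, best_score = weights[0].clone(), float("inf")
    for _ in range(n_iters):
        for w in weights:
            s = float(score_fn(fuse(visible_feat, infrared_feat, w)))
            if s < best_score:
                best_w, best_score = w.clone(), s
        # Drift the swarm toward the current best weight (a placeholder
        # for the dove-swarm update rule), then re-inject opposite points
        # of the first half to keep the swarm exploring.
        weights = (weights + 0.1 * (best_w - weights)).clamp(0.0, 1.0)
        half = len(weights) // 2
        weights[half:] = opposition(weights[:half])
    return best_w, best_score

# Example usage with random stand-ins for the encoder outputs:
# v, i = torch.randn(8, 768), torch.randn(8, 768)
# w, s = odso_weight_search(v, i, lambda f: f.var())
```

Re-injecting the opposite of each candidate every iteration is the generic opposition-based learning idea; in the proposed method the resulting weight would then gate the fused features fed to the final CAMReViT decoder layer.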
