Abstract

Extensive progress on person re-identification (Re-ID) has been made in the visible modality. Because security monitoring systems automatically switch from the visible modality to the infrared modality in dark conditions, the infrared-visible cross-modal person Re-ID (IV-Re-ID) task has attracted increasing attention. However, the heterogeneous physical properties of the two sensors lead to a substantial semantic gap between visible and infrared modality data, further increasing the difficulty of IV-Re-ID. This paper proposes a Cross-modal Channel Exchange Network (CmCEN) for the IV-Re-ID task. First, a non-local attention mechanism and a semi-weight-sharing mechanism are employed in the backbone network to enhance the discriminative capability of both local and global representations. Then, a channel exchange module is designed to effectively measure each channel's significance, so that uninformative channels can be replaced by critical channels from the other modality. Finally, a discriminator with an adversarial loss is introduced in the generative adversarial module: under the supervision of the adversarial loss, it encourages similar feature distributions for two different-modality images of the same person identity and dissimilar distributions for images of different identities, further improving the robustness of the learned latent feature space. Evaluation results on two cross-modal datasets demonstrate that CmCEN achieves competitive and even higher performance than state-of-the-art models in terms of accuracy and mAP on IV-Re-ID.
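The channel exchange step described above can be sketched as follows. This is a minimal NumPy illustration under the assumption that each channel's significance is measured by the magnitude of a per-channel scaling factor (e.g. a batch-norm gamma, as in channel-exchanging networks); the function name, the threshold value, and the use of NumPy rather than a deep-learning framework are illustrative choices, not details from the paper.

```python
import numpy as np

def channel_exchange(x_vis, x_ir, gamma_vis, gamma_ir, threshold=0.02):
    """Exchange low-significance channels between two modality feature maps.

    x_vis, x_ir : arrays of shape (C, H, W), visible / infrared features.
    gamma_vis, gamma_ir : per-channel significance scores of shape (C,)
        (assumed here to be batch-norm scaling-factor magnitudes).
    Channels whose score falls below `threshold` are treated as useless
    and replaced by the corresponding channels from the other modality.
    """
    out_vis = x_vis.copy()
    out_ir = x_ir.copy()
    swap_vis = np.abs(gamma_vis) < threshold   # "useless" visible channels
    swap_ir = np.abs(gamma_ir) < threshold     # "useless" infrared channels
    out_vis[swap_vis] = x_ir[swap_vis]         # take those channels from IR
    out_ir[swap_ir] = x_vis[swap_ir]           # and vice versa
    return out_vis, out_ir

# Toy usage: one weak channel in each modality gets swapped.
x_vis = np.ones((3, 2, 2))
x_ir = np.zeros((3, 2, 2))
gamma_vis = np.array([0.5, 0.001, 0.5])   # channel 1 is weak
gamma_ir = np.array([0.001, 0.5, 0.5])    # channel 0 is weak
out_vis, out_ir = channel_exchange(x_vis, x_ir, gamma_vis, gamma_ir)
```

Gating the exchange on scaling-factor magnitude means the swap is driven by learned importance rather than a fixed schedule, which is what lets the critical channels of one modality compensate for uninformative ones in the other.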

