Abstract

Current approaches to RGB-Infrared Person Re-Identification (RGB-IR ReID) often employ two parameter-specific subnetworks to extract modality-specific features from RGB and IR images, followed by a shared-parameter subnetwork that extracts modality-common features to facilitate cross-modality matching. However, these methods ignore the complementary relationship between modality-specific and modality-common features, so the mining of discriminative person-related modality-common features may be incomplete, leading to sub-optimal results. To address this issue, we design a novel multi-level modality-specific and modality-common features fusion network (3MFFNet). Specifically, in 3MFFNet, we design a multi-granularity feature fusion module (MGFFM) to exploit the complementary person-related information between modality-specific and modality-common features. Meanwhile, we combine a multi-granularity feature extraction strategy with an attention mechanism to select discriminative person-related modality-specific features and facilitate the fusion process. Together, these components allow the RGB-IR network to learn a more robust and discriminative person representation. Extensive experiments on two public datasets demonstrate that our approach outperforms existing state-of-the-art methods.
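To make the architecture described above concrete, here is a minimal PyTorch sketch of the general two-stream pattern: parameter-specific shallow stages per modality, a shared-parameter deep stage for modality-common features, and an attention-gated fusion of the two. This is an illustration under stated assumptions only; the ResNet-50 split point, the simple sigmoid channel gate standing in for MGFFM, and all names and defaults are hypothetical, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamReID(nn.Module):
    """Sketch of a two-stream RGB-IR backbone with attention-gated fusion.

    Not the paper's 3MFFNet; a generic instance of the design its abstract
    describes.
    """

    def __init__(self, num_ids: int = 395):  # illustrative identity count
        super().__init__()

        def specific_stages():
            # Shallow ResNet-50 stages, duplicated per modality
            # (the parameter-specific subnetworks).
            b = models.resnet50(weights=None)
            return nn.Sequential(b.conv1, b.bn1, b.relu, b.maxpool, b.layer1)

        self.rgb_specific = specific_stages()
        self.ir_specific = specific_stages()

        # Deep stages with shared parameters extract modality-common features.
        b = models.resnet50(weights=None)
        self.shared = nn.Sequential(b.layer2, b.layer3, b.layer4)

        self.pool = nn.AdaptiveAvgPool2d(1)
        # Project pooled specific features (256-d after layer1) into the
        # 2048-d common embedding space so the two can be fused.
        self.proj = nn.Linear(256, 2048)
        # Hypothetical channel-attention gate standing in for MGFFM: it
        # selects which specific channels are discriminative before fusion.
        self.gate = nn.Sequential(nn.Linear(2048, 2048), nn.Sigmoid())
        self.classifier = nn.Linear(2048, num_ids)

    def embed(self, x, specific_stream):
        f_spec = specific_stream(x)          # modality-specific feature map
        f_comm = self.shared(f_spec)         # modality-common feature map
        v_spec = self.proj(self.pool(f_spec).flatten(1))
        v_comm = self.pool(f_comm).flatten(1)
        # Complementary fusion: common features plus attention-weighted
        # specific features.
        return v_comm + self.gate(v_spec) * v_spec

    def forward(self, rgb, ir):
        e_rgb = self.embed(rgb, self.rgb_specific)
        e_ir = self.embed(ir, self.ir_specific)
        return self.classifier(e_rgb), self.classifier(e_ir), e_rgb, e_ir


if __name__ == "__main__":
    model = TwoStreamReID()
    rgb = torch.randn(2, 3, 256, 128)  # typical ReID crop size
    ir = torch.randn(2, 3, 256, 128)   # single-channel IR replicated to 3
    logits_rgb, logits_ir, e_rgb, e_ir = model(rgb, ir)
    print(logits_rgb.shape, e_rgb.shape)  # [2, 395] and [2, 2048]
```

The gated residual form `common + gate(specific) * specific` mirrors the abstract's claim that specific and common features are complementary: the common branch carries the cross-modality matching signal, while the gate admits only the specific channels judged discriminative for the identity.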
