Abstract
Infrared-visible person re-identification is a form of cross-modality person re-identification: given a query image of a person, the goal is to retrieve images of the same person from a gallery, where both query and gallery images may be captured in either the RGB or the infrared modality. Cross-modality person ReID addresses a limitation of single-modality methods, since in practice images are often available in more than one modality. In our work, we take advantage of both global and local features. We use a dual-path structure to extract features from RGB images and infrared images respectively, and we add an LSTM to each path to learn serialized local features. The loss function combines cross-entropy loss with hetero-center loss, so that the model can bridge the cross-modality and intra-modality gaps, capture modality-shared features, and improve cross-modality similarity. Finally, we conduct experiments on two datasets, SYSU-MM01 and RegDB, and compare with other methods from recent studies.
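The hetero-center idea mentioned above can be sketched as follows: for each identity, pull together the center (mean) of its RGB features and the center of its infrared features. This is a minimal NumPy sketch under assumptions; the function name, the per-identity averaging, and the squared-Euclidean penalty are illustrative and may not match the paper's exact formulation.

```python
import numpy as np

def hetero_center_loss(rgb_feats, ir_feats, labels):
    """Hetero-center loss sketch (assumption, not the paper's exact form).

    For each identity, compute the center of its RGB features and the
    center of its infrared features, then penalize the squared Euclidean
    distance between the two centers, averaged over identities.
    """
    pids = np.unique(labels)
    loss = 0.0
    for pid in pids:
        c_rgb = rgb_feats[labels == pid].mean(axis=0)  # RGB-path center
        c_ir = ir_feats[labels == pid].mean(axis=0)    # IR-path center
        loss += np.sum((c_rgb - c_ir) ** 2)
    return loss / len(pids)
```

In training, this term would be weighted and added to the cross-entropy classification loss, so the network is pushed to produce modality-shared features whose per-identity centers coincide across the two paths.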