Abstract

Existing cross-modality person re-identification methods usually impose set-level global feature constraints to reduce the cross-modality discrepancy. However, they often ignore modality-specific and modality-shared local feature matching. Modality-specific local matching helps extract discriminative, identity-consistent features and alleviates spatial misalignment, while modality-shared local matching correlates identity-consistent features across the two modalities and mitigates modality misalignment. We therefore cast the modality-specific and modality-shared local feature matching of identity parts as intra-modality and inter-modality low-rank relation-finding problems, and propose a low-rank local matching (LLM) approach to establish the intra-modality and inter-modality co-occurrence relations between identity parts. First, to reinforce identity-consistent features within each modality and correlate them across modalities, a local matching (LM) module is designed to estimate the co-occurrence probability of parts by measuring local feature similarity. Moreover, we propose a cross-modality triplet-center loss (CTLoss), which adds a global constraint on each class distribution in the embedding space to alleviate the dramatic data expansion caused by modality variance. Extensive experiments on two datasets demonstrate the superior performance of our approach over existing state-of-the-art methods. Our code is released at https://github.com/fegnyujian/LLM .
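
The abstract does not spell out how the LM module computes the part-level co-occurrence probability, so the following is only a rough, assumed sketch: it turns cosine similarities between the local part features of two samples into a row-wise probability matrix. The names part_cooccurrence, feats_a, feats_b and the temperature parameter are illustrative, not taken from the paper or its released code.

    import torch
    import torch.nn.functional as F

    def part_cooccurrence(feats_a, feats_b, temperature=0.1):
        """Estimate a part-level co-occurrence probability matrix.

        feats_a: (P, D) local part features of one sample (e.g. a visible image).
        feats_b: (P, D) local part features of another sample, possibly from
                 the other modality (e.g. an infrared image).
        Returns: (P, P) matrix whose row i is a probability distribution over
                 the parts of feats_b likely to co-occur with part i of feats_a.
        """
        # Cosine similarity between every pair of parts: (P, P)
        sim = F.normalize(feats_a, dim=1) @ F.normalize(feats_b, dim=1).t()
        # A temperature-scaled softmax converts similarities into co-occurrence
        # probabilities; more similar parts receive higher matching probability.
        return F.softmax(sim / temperature, dim=1)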

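The CTLoss formulation is likewise not given in the abstract. Below is only a minimal skeleton of the classic triplet-center objective that such a cross-modality loss would extend, under the assumption that class centers are shared by both modalities; the margin value and the helper name triplet_center_loss are illustrative.

    import torch
    import torch.nn.functional as F

    def triplet_center_loss(features, labels, centers, margin=0.5):
        """Pull each embedding toward its own class center and push it at
        least `margin` away from the nearest other-class center.

        features: (B, D) embeddings, assumed pooled from both modalities.
        labels:   (B,) integer identity labels.
        centers:  (C, D) learnable class centers shared across modalities.
        """
        # Euclidean distance from every sample to every class center: (B, C)
        dist = torch.cdist(features, centers)
        idx = torch.arange(features.size(0), device=features.device)
        pos = dist[idx, labels]  # distance to the ground-truth center
        # Exclude the own-class column before taking the closest rival center.
        mask = F.one_hot(labels, num_classes=centers.size(0)).bool()
        neg = dist.masked_fill(mask, float('inf')).min(dim=1).values
        return F.relu(pos + margin - neg).mean()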