Abstract

Clothing change person re-identification (CC-ReID) aims to match images of the same person wearing different clothes across diverse scenes. Leveraging biological features or clothing labels, existing CC-ReID methods have demonstrated promising performance. However, current research primarily focuses on supervised CC-ReID methods, which require a substantial number of manually annotated labels. To tackle this challenge, we propose a novel clothing-invariant contrastive learning (CICL) framework for the unsupervised CC-ReID task. Firstly, to obtain clothing-change positive pairs at a low computational cost, we propose a random clothing augmentation (RCA) method. RCA initially partitions clothing regions based on parsing images, then applies random augmentation to different clothing regions, ultimately generating clothing-change positive pairs to facilitate clothing-invariant learning. Secondly, to generate pseudo-labels strongly correlated with identity in an unsupervised manner, we design semantic fusion clustering (SFC), which enhances identity-related information through semantic fusion. Additionally, we develop a semantic alignment contrastive loss (SAC loss) to encourage the model to learn features strongly correlated with identity and to enhance its robustness to clothing changes. Unlike existing optimization methods that forcibly pull together clusters with different pseudo-labels, SAC loss aligns the clustering results of real image features with those generated by SFC, forming a mutually reinforcing scheme with SFC. Experimental results on multiple CC-ReID datasets demonstrate that the proposed CICL not only outperforms existing unsupervised methods but even achieves performance competitive with supervised CC-ReID methods. Code is made available at https://github.com/zqpang/CICL.
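To make the RCA idea concrete, below is a minimal sketch of how parsing-guided clothing augmentation could produce a clothing-change positive pair. This is an illustration under stated assumptions, not the authors' implementation: the parsing label IDs, the choice of colour perturbation, and the tensor shapes are all hypothetical (see the released code at the URL above for the actual method).

```python
# Hypothetical sketch of random clothing augmentation (RCA).
# Assumptions: LIP-style parsing maps, per-channel colour jitter as the
# "random augmentation", and (3, H, W) float images in [0, 1].
import torch

# Assumed parsing label IDs for clothing regions (LIP-style):
# upper clothes, dress, coat, pants, skirt.
CLOTHING_LABELS = (5, 6, 7, 9, 12)

def random_clothing_augmentation(img, parsing):
    """Return a clothing-change positive view of `img`.

    img:     float tensor (3, H, W) in [0, 1]
    parsing: long tensor (H, W) of per-pixel semantic labels
    """
    out = img.clone()
    for label in CLOTHING_LABELS:
        mask = parsing == label          # pixels of this clothing region
        if not mask.any():
            continue
        # Randomly perturb only this region's appearance, leaving
        # identity cues (face, body shape) untouched.
        shift = torch.rand(3, 1) - 0.5   # per-channel colour shift
        scale = 0.5 + torch.rand(3, 1)   # per-channel contrast scale
        region = out[:, mask]            # shape (3, n_pixels)
        out[:, mask] = (region * scale + shift).clamp(0.0, 1.0)
    return out

if __name__ == "__main__":
    img = torch.rand(3, 256, 128)
    parsing = torch.randint(0, 20, (256, 128))
    view = random_clothing_augmentation(img, parsing)
    # (img, view) would then serve as a clothing-change positive pair
    # for clothing-invariant contrastive learning.
    print(view.shape)  # torch.Size([3, 256, 128])
```

Because only pixels labelled as clothing are perturbed, the pair shares all identity-related content while differing in clothing appearance, which is what lets a contrastive objective push the representation toward clothing invariance.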
