Abstract

Domain generalization (DG) in person re-identification (ReID) is a challenging yet essential task: it aims to learn, from multiple labeled source domains, a model that performs well on unseen target domains. Most existing DG strategies for ReID simply aggregate all source data for training, which incurs a large inter-domain bias and unstable optimization; the model becomes prone to overfitting the domain bias and training becomes more time-consuming, hampering both generalization and convergence speed. To tackle these issues, inspired by Curriculum Learning, which mimics the easy-to-hard process of human lifelong learning, we propose a novel Debiased Contrastive Curriculum Learning (DCCL) strategy for DG ReID. DCCL incrementally enhances generalization through easy-to-hard training: it continuously accumulates learning experience to ease learning in unknown domains and effectively eliminates domain bias, helping the model learn rich domain-invariant discriminative features and thereby strengthening generalization while accelerating convergence. In addition, to learn class-level and instance-level discriminative representations simultaneously, we equip DCCL with a non-parametric hybrid contrastive loss. We also design an inter-domain mix module that variegates the features of the source domain newly added at each stage of DCCL, further reinforcing the advantages of DCCL. Extensive experiments on four public ReID benchmarks demonstrate that DCCL effectively strengthens the model's generalization to unseen domains and outperforms state-of-the-art methods.
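The abstract does not give the formulas behind the non-parametric hybrid contrastive loss or the inter-domain mix module, so the following is only an illustrative sketch of how such components are commonly realized: an InfoNCE-style loss computed against class-centroid and instance memory banks (rather than a learned classifier, hence "non-parametric"), plus a mixup-style feature interpolation between a newly added source domain and earlier ones. All function names, the temperature `tau`, the weighting `lam`, and the Beta-distributed mixing coefficient are assumptions, not details from the paper.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Project features onto the unit hypersphere (standard for contrastive ReID)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def hybrid_contrastive_loss(feat, label, centroids, bank, bank_labels,
                            tau=0.05, lam=0.5):
    """Hypothetical hybrid loss: class-level + instance-level InfoNCE terms.

    feat:        (d,) query feature of one sample
    centroids:   (C, d) per-identity centroid memory bank (L2-normalized rows)
    bank:        (N, d) instance memory bank (L2-normalized rows)
    bank_labels: (N,) identity label of each bank entry
    """
    f = l2norm(feat)

    # Class-level term: softmax over similarities to identity centroids,
    # cross-entropy at the true identity.
    logits_c = centroids @ f / tau
    logits_c -= logits_c.max()                      # numerical stability
    p_c = np.exp(logits_c) / np.exp(logits_c).sum()
    loss_class = -np.log(p_c[label])

    # Instance-level term: positives are all bank entries of the same identity.
    logits_i = bank @ f / tau
    logits_i -= logits_i.max()
    p_i = np.exp(logits_i) / np.exp(logits_i).sum()
    loss_inst = -np.log(p_i[bank_labels == label].sum())

    return lam * loss_class + (1.0 - lam) * loss_inst

def inter_domain_mix(feat_new, feat_old, alpha=0.6, rng=None):
    """Hypothetical mixup-style variegation of a newly added domain's features."""
    rng = rng or np.random.default_rng(0)
    mix = rng.beta(alpha, alpha)                    # mixing coefficient in (0, 1)
    return mix * feat_new + (1.0 - mix) * feat_old
```

In a sketch like this, the memory banks would be updated with momentum after each batch, and the curriculum would introduce one source domain per stage, applying `inter_domain_mix` to the newest domain's features.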
