Abstract

In this paper, a data assimilation network is proposed to tackle the challenges of domain generalization for person re-identification (ReID). Most existing research efforts focus only on the single-dataset setting, and the trained models generalize poorly to unseen scenarios. This paper presents a distinctive idea to improve the generalization ability of the model by assimilating three types of images: style-variant images, misaligned images, and unlabeled images. The latter two are often ignored in previous domain generalization ReID studies. In this paper, a non-local convolutional block attention module is designed to assimilate the misaligned images, and an attention adversary network is introduced to correct the attention. A progressive augmented memory is designed to assimilate the unlabeled images through progressive learning. Moreover, we propose an attention adversary difference loss for attention correction and a labeling-guide discriminative embedding loss for progressive learning. Rather than designing a specific feature extractor that is robust to style shift, as in most previous domain generalization work, we propose a data assimilation meta-learning procedure to train the proposed network so that it learns to assimilate style-variant images. It is worth mentioning that we add an unlabeled augmented dataset to the source domain to tackle domain generalization ReID tasks. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art domain generalization methods.
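The abstract names a non-local convolutional block attention module but does not spell out its internals. As a rough, non-authoritative illustration, the PyTorch sketch below shows one plausible way such a module could be composed: a standard embedded-Gaussian non-local block followed by CBAM-style channel and spatial attention. Every class name, layer choice, and hyperparameter here is an assumption for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """CBAM-style channel attention: pool spatially, re-weight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over channel-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block capturing long-range dependencies."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.phi(x).view(b, -1, h * w)                      # B x C' x HW
        v = self.g(x).view(b, -1, h * w).permute(0, 2, 1)       # B x HW x C'
        attn = torch.softmax(q @ k, dim=-1)                     # B x HW x HW
        y = (attn @ v).permute(0, 2, 1).view(b, -1, h, w)
        return x + self.out(y)                                  # residual connection


class NonLocalCBAM(nn.Module):
    """Hypothetical composition: non-local context, then channel and spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.non_local = NonLocalBlock(channels)
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x):
        x = self.non_local(x)
        x = self.channel_att(x)
        return self.spatial_att(x)
```

In a ReID backbone, such a module would typically be inserted after an intermediate convolutional stage, for example NonLocalCBAM(channels=1024) applied to a mid-level ResNet feature map; this placement is likewise an assumption rather than a detail taken from the paper.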
