Abstract

Unsupervised person re-identification has been improved significantly by the development of cross-domain person re-identification models, which transfer useful knowledge from labeled source data to completely unlabeled target data. However, existing cross-domain re-identification models share a major limitation: they are all built on a single-source, single-target setting, and a single source domain may have a tremendous gap from the target domain, which harms model training in the target domain. To overcome this drawback, this paper proposes a Multi-Source Transfer Network (MSTNet) that learns a shared target-biased feature space between multiple source domains and the target domain, achieving transfer learning at the feature level, pixel level, and task level through the proposed target-biased multi-source transfer learning module, relativistic adversarial learning module, and task-gap bridging module, respectively. By bridging the domain gaps at the feature, pixel, and task levels, the network jointly learns a discriminative model from multiple source domains and conducts re-identification effectively in the target domain. Extensive experiments on three widely recognized person re-identification datasets show that the proposed network achieves rank-1 accuracies of 80.9% and 74.6% on the DukeMTMC-reID and Market-1501 datasets, respectively. The results demonstrate the contribution of the proposed method compared with state-of-the-art methods, including hand-crafted feature, clustering, and transfer learning based methods.
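The abstract names a relativistic adversarial learning module for pixel-level transfer but gives no implementation details. As a minimal sketch, assuming the module follows the common relativistic average GAN formulation, the losses below (in PyTorch) compare each sample's discriminator score against the mean score of the opposite set; the function names and the pairing of target-domain images as "real" versus source-to-target translated images as "fake" are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits, fake_logits):
    # Discriminator: real samples should score higher than the average fake,
    # and fake samples should score lower than the average real.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def relativistic_g_loss(real_logits, fake_logits):
    # Generator: targets are swapped so translated (fake) images are pushed to
    # score higher than the average real image, and vice versa.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
            + F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))
```

Relative scoring of this kind is generally reported to stabilize adversarial training compared with the standard non-relativistic GAN loss.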

Highlights

  • Person re-identification is a challenging image retrieval task that aims to find a target person across multiple camera views with no overlapping areas

  • Domain adaptation focuses on establishing a knowledge-transferable discriminative model from labeled source datasets so that it can be applied to an unlabeled target domain, and it has already been employed in person re-identification [13], [19], [21]

  • In order to relax the limitation of single-source to single-target transfer learning in cross-domain person re-identification models, this paper proposes a Multi-Source Transfer Network (MSTNet), which exploits discriminative information from multiple labeled source domains at different levels to learn a robust target-biased feature space for unlabeled target data (a rough code sketch of this multi-source setup follows this list)
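As a rough illustration of the multi-source setting described above, the sketch below shows a single shared backbone mapping images from several labeled source domains and the unlabeled target domain into one feature space, with a separate identity classifier per source domain. The class name, the ResNet-50 backbone, and the per-source heads are assumptions made for illustration only; MSTNet's target-biased transfer and task-gap bridging modules are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiSourceReID(nn.Module):
    """Shared backbone with one identity classifier per labeled source domain.
    A minimal sketch of a multi-source re-id model, not the paper's MSTNet."""

    def __init__(self, num_ids_per_source, feat_dim=2048):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled feature
        self.backbone = backbone
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in num_ids_per_source])

    def forward(self, images, source_idx=None):
        feats = self.backbone(images)        # shared feature space for all domains
        if source_idx is None:               # unlabeled target: features only
            return feats
        return feats, self.classifiers[source_idx](feats)

if __name__ == "__main__":
    # e.g. two labeled sources with 751 and 702 training identities
    # (the ID counts of Market-1501 and DukeMTMC-reID)
    model = MultiSourceReID(num_ids_per_source=[751, 702])
    imgs = torch.randn(4, 3, 256, 128)       # typical re-id input resolution
    feats, logits = model(imgs, source_idx=0)       # supervised source batch
    target_feats = model(torch.randn(4, 3, 256, 128))  # unlabeled target batch
```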

Introduction

Person re-identification (re-id) is a challenging image retrieval task that aims to find a target person across multiple camera views with no overlapping areas. This topic has attracted a large amount of research attention in recent years on account of its important applications in automatic video analysis for public security [16], [17], [25], [27]. However, as illustrated in [26], re-identification models trained on a single labeled dataset perform poorly when they are introduced to unseen domains. In another direction, domain adaptation focuses on establishing a knowledge-transferable discriminative model from labeled source datasets so that it can be applied to an unlabeled target domain, and it has already been employed in person re-identification [13], [19], [21]. In these cross-domain person re-identification approaches, the most challenging task is to alleviate the domain gap between the source and target domains.
