Abstract

Recently, unsupervised person re-identification (re-ID) methods based on pseudo-label generation have achieved impressive performance. However, owing to the lack of pedestrian labels and the label noise introduced by clustering, large discrepancies across cameras prevent the model from learning discriminative features. To suppress label noise, many schemes attempt to correct feature learning in the target domain with the help of labeled source-domain data. However, because of the domain gap, the discriminative knowledge of the source domain is difficult to transfer directly to the target domain. To overcome this problem, we propose Sub-Domain Specified Batch Normalization (SDSBN), which forces the images from all cameras of both domains onto the same subspace, effectively aligning the distributions of all camera-level data across the two domains; person classifiers learned in the source domain can therefore transfer well to the target domain. We further propose a Reference Space (RS) in which source-domain person category features act as experts that judge whether a target-domain person image exhibits certain visual semantic characteristics, thereby correcting labeling errors caused by clustering in the target domain. In addition, because the coarse-grained holistic appearance of pedestrians is not conducive to hard-sample mining, we use an offline pose detector to crop person images into different body parts and perform similarity learning on each part to pull neighbors together, implementing fine-grained feature learning. This yields a multi-granularity unsupervised re-identification framework with two branches for global and local feature representations, respectively. Extensive experiments validate the superiority of our method for unsupervised person re-identification.
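The core idea behind SDSBN — treating each camera as a sub-domain and normalizing its features with its own statistics so that all cameras land on a common subspace — can be illustrated with a minimal sketch. The function below is a hypothetical NumPy illustration (the paper's actual layer would be a trainable batch-normalization module inside the network); it only shows the alignment step: per-camera standardization maps every sub-domain to zero mean and unit variance.

```python
import numpy as np

def sub_domain_bn(features, camera_ids, eps=1e-5):
    """Hypothetical sketch of the SDSBN alignment step.

    Each camera is treated as a sub-domain: its features are normalized
    with that camera's own mean and variance, so all sub-domains are
    mapped onto a shared zero-mean, unit-variance subspace.
    """
    out = np.empty_like(features, dtype=np.float64)
    for cam in np.unique(camera_ids):
        mask = camera_ids == cam          # samples from this camera
        sub = features[mask]
        mu = sub.mean(axis=0)             # per-camera channel mean
        var = sub.var(axis=0)             # per-camera channel variance
        out[mask] = (sub - mu) / np.sqrt(var + eps)
    return out

# Two cameras with very different feature statistics:
feats = np.vstack([np.random.randn(10, 4) + 5.0,   # camera 0, shifted up
                   np.random.randn(10, 4) - 3.0])  # camera 1, shifted down
cams = np.array([0] * 10 + [1] * 10)
aligned = sub_domain_bn(feats, cams)
```

After normalization, the per-camera distributions coincide, which is the property that lets a classifier trained on source-domain cameras transfer to target-domain cameras.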
