Abstract

Most existing unsupervised person re-identification (Re-ID) methods depend primarily on cluster distance and exploit only the available labeled source data to assign pseudo labels to the unannotated data. However, cluster distance often fails to adapt across datasets because of the domain gap. Moreover, learning exclusively from the source data cannot generate accurate pseudo labels, since it ignores information in the target data. To address these problems, we propose to exploit spatial-temporal constraints to facilitate the pseudo label generation process. Specifically, graphs are constructed for the labeled source data, and a graph convolutional network (GCN) is used to learn graph embeddings. Based on these graph embeddings, the likelihood of linkage between graph nodes is estimated and used to assign pseudo labels to the unlabeled data. With these pseudo labels, a smoothed spatial-temporal probability distribution model is then built to refine the linkage likelihoods between graph nodes and to correct the visual similarity scores for person Re-ID. Finally, we optimize the pseudo label assignment, the feature extraction networks, and the spatial-temporal model alternately and iteratively to improve person Re-ID performance. Comprehensive experiments demonstrate that the proposed method outperforms state-of-the-art methods.

Keywords: Person re-identification; Unsupervised learning; Graph convolutional network; Pseudo-labeling; Spatial-temporal constraints
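The abstract describes fusing visual similarity with a smoothed spatial-temporal probability model estimated from time gaps between camera observations. The sketch below illustrates one plausible form of that idea: a time-gap histogram with additive smoothing (so unseen gaps keep nonzero probability) that re-weights a visual similarity score. All function names, the binning scheme, and the multiplicative fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smoothed_st_prob(time_gaps, num_bins=10, max_gap=100.0, eps=1e-3):
    """Estimate a smoothed P(time gap) for a camera pair from observed gaps.

    Additive (Laplace-style) smoothing keeps bins with no observations
    at a small nonzero probability, mirroring the 'smoothed' model
    mentioned in the abstract.
    """
    hist, edges = np.histogram(time_gaps, bins=num_bins, range=(0.0, max_gap))
    probs = (hist + eps) / (hist.sum() + eps * num_bins)
    return probs, edges

def joint_score(visual_sim, gap, probs, edges):
    """Fuse visual similarity with the spatial-temporal probability
    (a simple product; the paper may use a different combination)."""
    idx = int(np.clip(np.searchsorted(edges, gap, side="right") - 1,
                      0, len(probs) - 1))
    return visual_sim * probs[idx]

# Toy example: most observed transitions take ~5-7 time units.
gaps = np.array([5.0, 7.0, 6.0, 40.0])
probs, edges = smoothed_st_prob(gaps)

# A candidate match with a typical time gap scores higher than one
# with an implausibly large gap, given equal visual similarity.
print(joint_score(0.8, 6.5, probs, edges) > joint_score(0.8, 90.0, probs, edges))
# → True
```

In an iterative scheme like the one described, these corrected scores would feed back into pseudo label assignment, and the histogram would be re-estimated as labels improve.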

