Abstract

Person re-identification (ReID) has witnessed tremendous improvements with the help of deep convolutional neural networks (CNNs). Nevertheless, because different domains have their own characteristics, most existing methods generalize poorly to unseen identities in new domains. To address this problem, we exploit the relationship between time stamps and camera positions and propose a robust and effective training strategy named temporal smoothing dynamic re-weighting and cross-camera learning (TSDRC). It transfers valuable knowledge from an existing labeled source domain to an unlabeled target domain. To improve the discriminability of the CNN model in the source domain, generally shared person attributes and a margin-based softmax loss are adopted to train the source model. In the target-domain training stage, TSDRC iteratively clusters the unlabeled samples into several centers and dynamically re-weights the samples of each center with a temporal smoothing score; a cross-camera triplet loss is then proposed to fine-tune the source-domain model. Comprehensive experiments on the Market-1501 and DukeMTMC-reID datasets demonstrate that the proposed method substantially improves unsupervised domain adaptation performance.
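The abstract does not give the exact formulation of the cross-camera triplet loss, but the general idea of a triplet loss constrained so that positives come from a different camera than the anchor can be sketched as follows. This is a minimal NumPy illustration under assumed inputs (pre-computed embeddings, pseudo-labels from clustering, and camera IDs); the function name, margin value, and hard-mining scheme are illustrative, not the paper's definitive implementation.

```python
import numpy as np

def cross_camera_triplet_loss(feats, labels, cams, margin=0.3):
    """Hinge triplet loss where the positive must come from a
    different camera than the anchor (illustrative sketch).

    feats:  (N, D) array of embeddings
    labels: (N,) pseudo-identity labels (e.g. cluster assignments)
    cams:   (N,) camera IDs
    """
    # Pairwise Euclidean distance matrix, shape (N, N).
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    losses = []
    for a in range(len(labels)):
        # Positives: same pseudo-identity, but captured by a different camera.
        pos_mask = (labels == labels[a]) & (cams != cams[a])
        # Negatives: any sample with a different pseudo-identity.
        neg_mask = labels != labels[a]
        if not pos_mask.any() or not neg_mask.any():
            continue  # skip anchors without a valid cross-camera triplet
        hardest_pos = dists[a][pos_mask].max()  # farthest cross-camera positive
        hardest_neg = dists[a][neg_mask].min()  # closest negative
        losses.append(max(0.0, margin + hardest_pos - hardest_neg))
    return float(np.mean(losses)) if losses else 0.0
```

Requiring the positive to come from another camera forces the embedding to bridge camera-specific appearance changes (viewpoint, illumination), which is the main source of intra-identity variation that plain triplet mining can ignore.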
