Abstract

Person re-identification (re-ID) is an important topic in computer vision. We study the one-example re-ID task, where each identity has only one labeled example along with many unlabeled examples. In practice, the unlabeled data are difficult to distinguish by person because of conditions such as low resolution, occlusion, and varying illumination. Previous work has shown that fine-grained information is useful for supervised re-ID. To choose reliable, easy samples for self-paced learning, we exploit fine-grained features to measure the distances between labeled and unlabeled data; the combination of global and overlapping-part distances selects more positive samples for model training. In addition, an attention mechanism is introduced to suppress interference from the background. The training data are split into three parts, i.e., labeled data, pseudolabeled data, and instance-labeled data. First, the model is initialized with the one-shot example of each identity. Then, pseudolabels are estimated from the unlabeled data, and the model is updated iteratively. A self-paced progressive sampling method increases the number of selected pseudolabeled candidates step by step. Notably, with a pretrained model, our method achieves 86.0% rank-1 accuracy on Market-1501, exceeding most other methods, and experiments on two image-based datasets demonstrate promising results under the one-example re-ID setting.
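The abstract describes combining a global distance with an overlapping-part distance to pick confident pseudo-labels, and enlarging the selected set step by step. The sketch below illustrates that idea only; the function names, the Euclidean metric, the weight `w`, and the linear growth schedule are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def combined_distance(labeled_feats, unlabeled_feats,
                      labeled_parts, unlabeled_parts, w=0.5):
    """Blend a global feature distance with an overlapping-part distance.

    labeled_feats:   (L, D)    global features of the one-shot labeled examples
    unlabeled_feats: (U, D)    global features of the unlabeled examples
    labeled_parts:   (L, P, D) part features (e.g., P overlapping stripes)
    unlabeled_parts: (U, P, D)
    w: blending weight (assumed; the paper's exact combination may differ).
    """
    # Euclidean distance between every unlabeled/labeled pair (global branch).
    d_global = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=-1)
    # Average Euclidean distance over the overlapping parts (part branch).
    d_part = np.linalg.norm(
        unlabeled_parts[:, None, :, :] - labeled_parts[None, :, :, :],
        axis=-1).mean(axis=-1)
    return w * d_global + (1.0 - w) * d_part


def progressive_pseudo_labels(dist, step, total_steps):
    """Self-paced sampling: keep only the most confident pseudo-labels,
    enlarging the selected set at each step (linear schedule assumed)."""
    nearest = dist.argmin(axis=1)       # nearest labeled identity per unlabeled sample
    confidence = -dist.min(axis=1)      # smaller distance -> higher confidence
    n_select = int(len(nearest) * (step + 1) / total_steps)
    selected = np.argsort(-confidence)[:n_select]
    return selected, nearest[selected]  # sample indices and their pseudo-labels
```

In this reading, the combined distance is recomputed after each model update, and the growing set of pseudo-labeled candidates is added to the one-shot labeled data for the next training round.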
