Abstract
The one-shot person Re-ID scenario faces two kinds of uncertainty when constructing the prediction model from X to Y. The first is model uncertainty, which captures the noise of the parameters in DNNs due to a lack of training data. The second is data uncertainty, which can be divided into two subtypes: image noise, where severe occlusion and complex backgrounds contain information irrelevant to the identity; and label noise, where mislabeling affects visual appearance learning. We find that state-of-the-art one-shot person Re-ID methods address the first issue, model uncertainty, via a dynamic sampling strategy, while the second issue, data uncertainty, remains. In this paper, to address both issues simultaneously, we propose a novel SPUE-Net for one-shot person Re-ID. By introducing a self-paced sampling strategy, our method iteratively estimates the pseudo-labels of unlabeled samples to gradually expand the labeled set and reduce model uncertainty without extra supervision. We divide the pseudo-labeled samples into two subsets to use the training samples more effectively. In addition, we apply a co-operative learning method of local uncertainty estimation combined with determinacy estimation to achieve better hidden-space feature mining and to improve the precision of selected pseudo-labeled samples, which reduces data uncertainty. Extensive comparative evaluation experiments on video-based and image-based datasets show that SPUE-Net has significant advantages over state-of-the-art methods.
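The self-paced sampling idea described above can be illustrated with a minimal, generic pseudo-labeling loop: each round, every unlabeled sample is assigned the label of its nearest labeled sample, and only the closest (easiest) fraction is promoted into the labeled set, with the fraction growing per round. This is an illustrative sketch under simplifying assumptions (Euclidean feature distance as a confidence proxy, a linear growth schedule); it is not the SPUE-Net implementation, and the function and parameter names are hypothetical.

```python
import math

def self_paced_pseudo_labeling(labeled, unlabeled, rounds=3, grow=0.34):
    """Generic self-paced pseudo-labeling loop (illustrative sketch only).

    labeled   -- list of (feature_tuple, label) pairs (the one-shot seeds)
    unlabeled -- list of feature tuples
    rounds    -- number of expansion iterations
    grow      -- fraction of the unlabeled pool added per round
    Returns the expanded labeled set.
    """
    labeled = list(labeled)
    pool = list(unlabeled)
    total = len(pool)
    for r in range(1, rounds + 1):
        if not pool:
            break
        # Distance to the nearest labeled sample acts as a confidence proxy:
        # smaller distance means an "easier" pseudo-label.
        scored = []
        for x in pool:
            d, y = min((math.dist(x, f), lab) for f, lab in labeled)
            scored.append((d, x, y))
        scored.sort(key=lambda t: t[0])              # easiest samples first
        k = max(1, int(total * grow * r))            # selection grows each round
        chosen = scored[:k]
        pool = [x for _, x, _ in scored[k:]]
        labeled.extend((x, y) for _, x, y in chosen)
    return labeled
```

A usage example: with seeds `[((0.0, 0.0), 'a'), ((10.0, 10.0), 'b')]` and four unlabeled points near those two clusters, the loop absorbs all four points over three rounds, assigning each the label of its nearby cluster.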