Abstract

Video person re-identification (re-ID) methods extract richer features from video tracklets than image-based ones and have received growing attention. However, existing supervised methods require numerous cross-camera identity labels, which is impractical for large-scale data. Although clustering-based unsupervised methods have been exploited to obtain pseudo labels and train models iteratively for video person re-ID, they remain in their infancy due to the diversity of person images and the uncertain image quality of video tracklets. In this work, we employ two strategies of <u>S</u>ampling and <u>R</u>e-weighting for <u>C</u>lustering (SRC) to obtain robust and discriminative person feature representations. This method considers the influence of two kinds of frames in a tracklet: 1)&#x00A0;Detection errors or heavy occlusions generate noisy frames in the tracklet, and tracklets containing such noisy frames may be assigned unreliable annotations during clustering. 2)&#x00A0;Different frames are identified by the model with varying degrees of difficulty, caused by pose changes or partial occlusions. We call these hard frames: they are hard to identify but informative. To alleviate these problems, we propose a dynamic noise trimming module and a diverse frame re-weighting module for sampling and re-weighting, respectively. The dynamic noise trimming module strengthens the dependability of the tracklet representation by removing noisy frames, which enhances clustering accuracy. The diverse frame re-weighting module focuses training on hard frames to enhance the learning of rich information from tracklets. Experiments on three video datasets, <i>i.e.</i> DukeMTMC-VideoReID, MARS, and PRID2011, demonstrate the effectiveness of the proposed SRC under the unsupervised re-ID setting.
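The two ideas in the abstract can be illustrated at the tracklet level: drop frames that are outliers with respect to the tracklet's mean feature (a stand-in for detection errors or heavy occlusions), then up-weight the remaining frames that are farthest from the mean (the "hard" but informative ones). The sketch below is a minimal illustration under these assumptions, not the paper's exact formulation; the function name, the cosine-similarity criterion, and the exponential weighting are all hypothetical choices for exposition.

```python
import numpy as np

def trim_and_reweight(frame_feats, trim_ratio=0.2):
    """Illustrative sketch of sampling + re-weighting for one tracklet.

    frame_feats: (N, D) array of per-frame features.
    1) Noise trimming: drop the trim_ratio fraction of frames least
       similar to the tracklet mean (proxy for noisy frames).
    2) Re-weighting: give the kept frames that are less similar to the
       mean (harder frames) larger weights, so they contribute more.
    Returns a weighted tracklet representation of shape (D,).
    """
    # L2-normalize frames and the tracklet mean
    feats = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    mean = feats.mean(axis=0)
    mean /= np.linalg.norm(mean)

    sims = feats @ mean  # cosine similarity of each frame to the mean

    # Keep the (1 - trim_ratio) fraction of frames most similar to the mean
    n_keep = max(1, int(np.ceil(len(feats) * (1.0 - trim_ratio))))
    keep = np.argsort(sims)[::-1][:n_keep]
    kept_feats, kept_sims = feats[keep], sims[keep]

    # Harder kept frames (lower similarity) receive larger weights
    weights = np.exp(-kept_sims)
    weights /= weights.sum()
    return (weights[:, None] * kept_feats).sum(axis=0)
```

In a clustering-based pipeline, representations produced this way would be clustered (e.g. with DBSCAN) to generate pseudo labels for the next training round; the trimming step aims to keep unreliable frames from corrupting those labels.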

