Abstract

Video-based person re-identification (re-id) has attracted significant research interest. Faced with the rapid growth of new pedestrian videos, existing video-based person re-id methods usually require large quantities of labeled pedestrian videos to train a discriminative model. In practice, labeling large quantities of pedestrian videos is costly and time-consuming, which limits the application of these methods in real environments. It is therefore valuable and necessary to investigate how to learn a discriminative re-id model from a limited number of labeled training videos. In this paper, we propose a semi-supervised cross-view projection-based dictionary learning (SCPDL) approach for video-based person re-id. Specifically, SCPDL jointly learns a pair of feature projection matrices and a pair of dictionaries by integrating the information contained in both labeled and unlabeled pedestrian videos. With the learned feature projection matrices, the influence of variations within each video on re-id can be reduced. With the learned dictionary pair, pedestrian videos from two different cameras can be converted into coding coefficients in a common representation space, so that the discrepancy between cameras can be bridged. In the learning process, the labeled pedestrian videos ensure that the learned dictionaries have favorable discriminability, while the large quantities of unlabeled pedestrian videos enable SCPDL to better capture the variations between pedestrian videos, so that the learned dictionaries gain stronger representational capability. Experiments on two public pedestrian sequence data sets (iLIDS-VID and PRID 2011) demonstrate the effectiveness of the proposed approach.
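
The abstract does not give SCPDL's objective function, so the following is only a minimal, hypothetical sketch of the general idea it describes: learning per-camera projections (P1, P2) and dictionaries (D1, D2) such that projected videos from the two views share one set of coding coefficients, in which matching is then performed. The least-squares objective, the alternating updates, and all names here are illustrative assumptions, not the paper's formulation; in particular, the semi-supervised use of labeled and unlabeled videos is omitted.

```python
# Hypothetical sketch of cross-view projection-based dictionary learning.
# Assumed objective (NOT the paper's):
#   min ||P1 X1 - D1 A||_F^2 + ||P2 X2 - D2 A||_F^2 + lam * ||A||_F^2
# where X1, X2 hold paired video features from two cameras and A is the
# shared coding matrix, solved by alternating least squares.
import numpy as np

rng = np.random.default_rng(0)

def scpdl_sketch(X1, X2, k_proj=32, k_atoms=48, lam=0.1, n_iter=25):
    """Alternately update shared codes, dictionaries, and projections."""
    d1, n = X1.shape
    d2, _ = X2.shape
    P1 = rng.standard_normal((k_proj, d1)) / np.sqrt(d1)
    P2 = rng.standard_normal((k_proj, d2)) / np.sqrt(d2)
    D1 = rng.standard_normal((k_proj, k_atoms))
    D2 = rng.standard_normal((k_proj, k_atoms))
    I_k = np.eye(k_atoms)
    for _ in range(n_iter):
        Y1, Y2 = P1 @ X1, P2 @ X2
        # Shared codes A: ridge regression against the stacked dictionaries.
        D, Y = np.vstack([D1, D2]), np.vstack([Y1, Y2])
        A = np.linalg.solve(D.T @ D + lam * I_k, D.T @ Y)
        # Dictionary updates: least squares, then column normalization.
        G = np.linalg.pinv(A @ A.T)
        D1, D2 = Y1 @ A.T @ G, Y2 @ A.T @ G
        D1 /= np.maximum(np.linalg.norm(D1, axis=0), 1e-8)
        D2 /= np.maximum(np.linalg.norm(D2, axis=0), 1e-8)
        # Projection updates: regress raw features onto the reconstructions.
        P1 = (D1 @ A) @ X1.T @ np.linalg.pinv(X1 @ X1.T + lam * np.eye(d1))
        P2 = (D2 @ A) @ X2.T @ np.linalg.pinv(X2 @ X2.T + lam * np.eye(d2))
    return P1, P2, D1, D2

def encode(x, P, D, lam=0.1):
    """Code a single video feature in its camera's dictionary."""
    y = P @ x
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

# Toy usage: 100 paired videos with 200-/180-dim features per camera.
X1 = rng.standard_normal((200, 100))
X2 = rng.standard_normal((180, 100))
P1, P2, D1, D2 = scpdl_sketch(X1, X2)
a, b = encode(X1[:, 0], P1, D1), encode(X2[:, 0], P2, D2)
# Matching happens in the common coefficient space, e.g. cosine similarity.
sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

The key design point the abstract conveys is that both videos are compared via their coding coefficients rather than raw features, so camera-specific appearance differences are absorbed by the per-camera projections and dictionaries.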
