Abstract

Person re-identification aims to identify the same pedestrian across different camera views at different locations. This important yet difficult intelligent video analysis problem remains an active area of research, driven by the demand for better performance. Person re-identification involves two main steps: feature representation and metric learning. Handcrafted features, such as color and texture histograms, are frequently used for person re-identification, but most handcrafted features are limited in that they cannot be applied directly to practical problems. Deep learning methods have achieved state-of-the-art performance in a wide variety of applications, including image annotation, face recognition, and speech recognition. However, deep learning features depend heavily on large-scale labeled samples. In this paper, by utilizing Cross-view Quadratic Discriminant Analysis (XQDA) metric learning, we propose a novel scheme called deep multi-view feature learning (DMVFL), which exploits the complementarity of handcrafted and deep learning features in a simple but effective way. Furthermore, we show that XQDA is a robust algorithm. Extensive experiments on two challenging person re-identification data sets (VIPeR and GRID) demonstrate that DMVFL improves on current state-of-the-art methods.
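The pipeline outlined in the abstract, fusing a handcrafted descriptor with a deep feature and learning an XQDA metric on the fused vectors, can be illustrated roughly as follows. This is a minimal NumPy/SciPy sketch under assumed details: the synthetic features, the xqda/xqda_distance helpers, the subspace dimension, and the regularization value are illustrative assumptions, not the authors' implementation of DMVFL or of XQDA.

```python
# Minimal sketch (assumptions, not the paper's code): concatenate handcrafted and
# deep features, then learn an XQDA-style subspace W and metric kernel M.
import numpy as np
from scipy.linalg import eigh, inv


def xqda(X_probe, X_gallery, ids_probe, ids_gallery, dim=20, reg=1e-3):
    """Learn a subspace W and metric kernel M from labelled cross-view pairs."""
    d = X_probe.shape[1]
    sigma_i = np.zeros((d, d))  # intra-personal (same identity) covariance
    sigma_e = np.zeros((d, d))  # extra-personal (different identity) covariance
    n_i = n_e = 0
    for xp, ip in zip(X_probe, ids_probe):
        for xg, ig in zip(X_gallery, ids_gallery):
            diff = (xp - xg)[:, None]
            if ip == ig:
                sigma_i += diff @ diff.T
                n_i += 1
            else:
                sigma_e += diff @ diff.T
                n_e += 1
    sigma_i = sigma_i / n_i + reg * np.eye(d)
    sigma_e = sigma_e / n_e + reg * np.eye(d)

    # Generalized eigenproblem: directions that maximize extra-personal variance
    # relative to intra-personal variance span the learned subspace.
    _, evecs = eigh(sigma_e, sigma_i)
    W = evecs[:, ::-1][:, :dim]  # eigh returns ascending order; keep the largest

    # XQDA-style kernel in the subspace: inv(Sigma_I') - inv(Sigma_E').
    M = inv(W.T @ sigma_i @ W) - inv(W.T @ sigma_e @ W)
    return W, M


def xqda_distance(x, z, W, M):
    """Distance between two fused feature vectors under the learned metric."""
    diff = W.T @ (x - z)
    return float(diff @ M @ diff)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_id, d_hand, d_deep = 20, 30, 40
    ids = np.arange(n_id)
    # Hypothetical fused features: a handcrafted histogram block concatenated
    # with a CNN embedding block (both synthetic placeholders here).
    base = rng.normal(size=(n_id, d_hand + d_deep))
    X_probe = base + 0.1 * rng.normal(size=base.shape)    # camera view A
    X_gallery = base + 0.1 * rng.normal(size=base.shape)  # camera view B
    W, M = xqda(X_probe, X_gallery, ids, ids, dim=20)
    print("same identity:     ", round(xqda_distance(X_probe[0], X_gallery[0], W, M), 3))
    print("different identity:", round(xqda_distance(X_probe[0], X_gallery[1], W, M), 3))
```

In the actual scheme the two feature blocks would come from the paper's handcrafted and deep extractors and be concatenated before metric learning; the synthetic blocks above only stand in for them.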
