Abstract

Person re-identification has become an active research topic due to its importance in surveillance and forensics applications. The goal of person re-identification is to find the same person across disjoint camera views at different times. Most existing methods identify a person by measuring the similarity of two still images from different camera views, which relies only on intra-image features such as color, shape, and texture. In this paper, we propose a person re-identification architecture that analyzes a sequence pair rather than an image pair, so that gait is considered in addition to intra-image features. In contrast to existing works that use handcrafted features, our method automatically learns spatio-temporal features optimal for the person re-identification task with a deep convolutional network. To learn a discriminative metric, we use a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA); this step can also be viewed as a form of transfer learning. Experiments show that our method significantly outperforms the state of the art on both a large dataset (CUHK03) and a medium-sized dataset (CUHK01). We also obtain better performance on a small dataset (VIPeR) with a pre-trained network without fine-tuning.
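
For readers unfamiliar with XQDA, the NumPy sketch below illustrates how a learned subspace and metric of this kind are typically applied at test time, following the standard XQDA formulation (Liao et al., 2015). It is a minimal illustration only: it assumes the deep network has already produced the feature vectors x and z, and the function and variable names are hypothetical rather than taken from the authors' implementation.

import numpy as np

def xqda_distance(x, z, W, sigma_intra, sigma_extra):
    """XQDA-style distance between two feature vectors x and z.

    W           : (d, r) learned subspace projection (illustrative)
    sigma_intra : (r, r) intrapersonal covariance in the projected subspace
    sigma_extra : (r, r) extrapersonal covariance in the projected subspace
    Distance: (x - z)^T W (sigma_intra^-1 - sigma_extra^-1) W^T (x - z)
    """
    M = np.linalg.inv(sigma_intra) - np.linalg.inv(sigma_extra)
    diff = W.T @ (x - z)  # project the difference into the learned subspace
    return float(diff @ M @ diff)

# Toy usage with random data; shapes are arbitrary and for illustration only.
rng = np.random.default_rng(0)
d, r = 128, 16
W = rng.standard_normal((d, r))
A = rng.standard_normal((r, r)); sigma_intra = A @ A.T + np.eye(r)  # SPD covariance
B = rng.standard_normal((r, r)); sigma_extra = B @ B.T + np.eye(r)
x, z = rng.standard_normal(d), rng.standard_normal(d)
print(xqda_distance(x, z, W, sigma_intra, sigma_extra))

In the actual pipeline, W and the two covariance matrices would be estimated from labeled training pairs rather than generated randomly; the sketch only shows how the resulting metric scores a gallery-probe pair.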
