Abstract

Person re-identification, which aims to match people across non-overlapping camera views, has become an important research topic due to increasing demand in applications such as video surveillance and security monitoring. Matching people across cameras is challenging because the appearance of the same subject may change dramatically between views due to variations in pose, lighting conditions, etc. To reduce the feature discrepancy caused by view changes, most existing methods focus either on robust feature extraction or on view-invariant feature transformation. During matching, a subject to be identified, i.e., a probe, is compared with each subject in a gallery of known identities, and a ranked list is returned based on the similarity scores. However, such a matching process considers only the pairwise similarity between the probe and each gallery subject, while higher-order relationships between the probe and the gallery, or even among the gallery subjects themselves, are ignored. To address this issue, we propose a hypergraph-based matching scheme in which both pairwise and higher-order relationships among the probe and gallery subjects are discovered through hypergraph learning. In this way, improved similarity scores are obtained compared with the conventional pairwise similarity measure. We conduct experiments on two widely used person re-identification datasets, and the results demonstrate that matching through hypergraph learning leads to superior performance compared with state-of-the-art methods. Furthermore, the proposed approach can be easily incorporated into any existing method in which similarities between the probe and the gallery are computed.
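
To make the matching scheme concrete, the sketch below illustrates one common way such hypergraph-based re-ranking can be realized: hyperedges are formed from each vertex and its k nearest neighbours, and similarity scores are refined by propagating the probe's query label over the normalized hypergraph (following the transductive ranking formulation of Zhou et al.). The feature inputs, the neighbourhood size `k`, the propagation parameter `alpha`, and the function name `hypergraph_rerank` are illustrative assumptions, not the exact construction described in the paper.

```python
import numpy as np

def hypergraph_rerank(probe_feat, gallery_feats, k=5, alpha=0.9):
    """Return refined probe-to-gallery similarity scores (illustrative sketch).

    probe_feat    : (d,) feature vector of the probe.
    gallery_feats : (n, d) feature matrix of the gallery.
    """
    X = np.vstack([probe_feat[None, :], gallery_feats])   # probe is vertex 0
    n = X.shape[0]

    # Pairwise affinities from squared Euclidean distances (assumed similarity measure).
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    sigma = np.mean(d2) + 1e-12
    A = np.exp(-d2 / sigma)

    # One hyperedge per vertex: the vertex plus its k nearest neighbours.
    # This is where higher-order (group) relationships enter the model.
    H = np.zeros((n, n))                                   # incidence matrix (vertex x hyperedge)
    W = np.zeros(n)                                        # hyperedge weights
    for e in range(n):
        nbrs = np.argsort(d2[e])[:k + 1]                   # includes the centroid vertex itself
        H[nbrs, e] = 1.0
        W[e] = A[e, nbrs].sum()

    # Normalized hypergraph adjacency: Theta = Dv^-1/2 H W De^-1 H^T Dv^-1/2.
    Dv = H @ W                                             # vertex degrees
    De = H.sum(axis=0)                                     # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv + 1e-12))
    Theta = Dv_inv_sqrt @ H @ np.diag(W) @ np.diag(1.0 / De) @ H.T @ Dv_inv_sqrt

    # Transductive ranking: propagate the probe's label through the hypergraph,
    # f = (1 - alpha) * (I - alpha * Theta)^-1 * y.
    y = np.zeros(n)
    y[0] = 1.0                                             # the probe is the query vertex
    f = np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)

    return f[1:]                                           # refined scores for the gallery subjects
```

In such a scheme, the refined scores would replace the raw pairwise similarities when producing the ranked list for each probe, which is how the approach can plug into any existing pipeline that computes probe-gallery similarities.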
