Abstract

Person re-identification (ReID), the problem of associating images of the same person across different camera views, can be divided into two categories, multi-shot and single-shot, depending on the number of images used to represent a person. Numerous methods have been proposed for person ReID; however, these studies focus either on feature extraction for person representation or on metric learning for person matching. Taking into account that each feature has its own representation power for each individual, in this paper we focus on improving person ReID performance through feature fusion. To this end, we first formulate multi-shot person ReID as an information retrieval problem in which each person in the probe set is considered a query person. Inspired by the idea of query-adaptive late fusion proposed for image retrieval in [1], we propose two adaptive late fusion schemes for multi-shot person ReID. To show the robustness of the fusion schemes, we integrate them into a multi-shot person ReID framework in which both hand-crafted and deep-learned features are extracted. Experimental results on the PRID-2011 and iLIDS-VID benchmark datasets show that the rank-1 matching rate increases by up to 5.65% and 14.13%, respectively.
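
To make the late-fusion idea concrete, the sketch below shows one simple way per-query adaptive weighting of multiple feature channels could look at the score level. It is an illustrative simplification under assumed inputs (per-feature similarity arrays for one probe person against the gallery), not the exact scheme of [1] or of the two fusion schemes proposed in the paper; the weighting rule (curve sharpness) is a hypothetical stand-in.

```python
import numpy as np

def query_adaptive_fusion(score_lists, eps=1e-8):
    """Fuse per-feature similarity scores for one probe (query) person.

    score_lists: list of 1-D arrays, one per feature type; score_lists[k][j]
    is the similarity of the query to gallery identity j under feature k.
    Returns fused scores with the same length as each input array.

    Illustrative weighting only: a feature whose sorted score curve drops
    sharply after the best match is treated as more discriminative for this
    particular query and receives a larger weight.
    """
    fused = np.zeros_like(score_lists[0], dtype=float)
    weights = []
    for scores in score_lists:
        s = np.sort(scores)[::-1]                       # descending score curve
        s = (s - s.min()) / (s.max() - s.min() + eps)   # min-max normalize
        # Sharpness of the curve: gap between the best score and the tail mean.
        sharpness = s[0] - s[1:].mean()
        weights.append(max(sharpness, eps))
    weights = np.array(weights) / np.sum(weights)       # per-query weights
    for w, scores in zip(weights, score_lists):
        smin, smax = scores.min(), scores.max()
        fused += w * (scores - smin) / (smax - smin + eps)
    return fused

# Example: two feature channels (e.g., one hand-crafted, one deep-learned)
# scoring the same query against five gallery identities.
hand_crafted = np.array([0.91, 0.40, 0.38, 0.35, 0.33])
deep_learned = np.array([0.55, 0.54, 0.53, 0.52, 0.20])
print(query_adaptive_fusion([hand_crafted, deep_learned]))
```

In this toy example the hand-crafted channel separates its top match clearly and therefore dominates the fused ranking for this query, while for another query the weights would be recomputed from that query's own score curves.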
