Abstract

Person re-identification is widely regarded as an image retrieval problem: given a query image of a pedestrian of interest captured by one camera, the system aims to identify images of the same person in an image pool (gallery). Owing to differences in camera resolution, pose, illumination, and occlusion, together with intra-class variation across cameras, the task remains challenging for the computer vision community. In this paper, we propose a multi-task network, built on a uniform partition network, that computes the identification loss and the verification loss of two input images simultaneously. Given a pair of images as input, the system predicts the identity of each image and, at the same time, outputs a similarity score indicating whether the two images belong to the same identity. To obtain more fine-grained part-level features, we adopt the part-based convolutional baseline network to extract features from each input image, producing a convolutional descriptor consisting of six local features. Our model achieves 81.19% mAP and 93.34% rank-1 accuracy on the Market-1501 dataset, and 72.12% mAP and 85.59% rank-1 accuracy on DukeMTMC-reID. It outperforms the previous state of the art by 3.79% mAP and 1.03% rank-1 on Market-1501, and by 6.02% mAP and 3.79% rank-1 on DukeMTMC-reID.
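The combined objective described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes each image yields six part-level logit vectors (one per horizontal stripe, following the part-based baseline), sums a softmax cross-entropy identification loss over all parts of both images, and adds a binary cross-entropy verification loss on the predicted similarity score. All function names and the scalar similarity input are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # Identification loss for one part-level classifier.
    return -math.log(softmax(logits)[label])

def multitask_loss(parts_a, parts_b, label_a, label_b, sim_score, same):
    """Combined loss for one image pair.

    parts_a, parts_b: lists of per-part logit vectors (six per image
                      in the paper's setting).
    label_a, label_b: ground-truth identity indices for each image.
    sim_score:        predicted probability (0, 1) that the pair shares
                      one identity.
    same:             1 if the pair is a true match, else 0.
    """
    # Identification loss: cross-entropy summed over every part of both images.
    id_loss = sum(cross_entropy(p, label_a) for p in parts_a)
    id_loss += sum(cross_entropy(p, label_b) for p in parts_b)
    # Verification loss: binary cross-entropy on the similarity score.
    ver_loss = -(same * math.log(sim_score)
                 + (1 - same) * math.log(1 - sim_score))
    return id_loss + ver_loss
```

In practice both terms are backpropagated jointly, so the shared backbone is trained to produce part features that are simultaneously discriminative for identity classification and comparable across the pair.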
