Abstract

Person reidentification (re-id) aims to recognize a specific pedestrian across non-overlapping surveillance camera views. Most re-id methods perform the retrieval task by comparing the similarity of pedestrian features extracted from deep learning models. Therefore, learning a discriminative feature is critical for person reidentification. Many works supervise model learning with one or more loss functions to obtain discriminative features. Softmax loss is one of the most widely used loss functions in re-id. However, the traditional softmax loss inherently focuses on feature separability and fails to consider the compactness of within-class features. To further improve the accuracy of re-id, much effort has been devoted to shrinking within-class discrepancy as well as between-class similarity. In this paper, we propose a circle-based ratio loss for person re-identification. Concretely, we normalize the learned features and classification weights to map these vectors onto a hypersphere. Then we take the ratio of the maximal intraclass distance to the minimal interclass distance as an objective loss, so that between-class separability and within-class compactness can be optimized simultaneously during the training stage. Finally, with the joint training of an improved softmax loss and the ratio loss, the deep model can mine discriminative pedestrian information and learn robust features for the re-id task. Comprehensive experiments on three re-id benchmark datasets are carried out to illustrate the effectiveness of the proposed method. Specifically, 83.12% mAP on Market-1501, 71.66% mAP on DukeMTMC-reID, and 66.26%/63.24% mAP on CUHK03 labeled/detected are achieved, respectively.
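The core objective described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a simple mini-batch formulation in which features are L2-normalized onto the unit hypersphere and the loss is the ratio of the largest within-class pairwise distance to the smallest between-class pairwise distance; the function name and the `eps` stabilizer are our own choices for illustration.

```python
import numpy as np

def ratio_loss(features, labels, eps=1e-6):
    """Illustrative sketch: ratio of the maximal intra-class distance
    to the minimal inter-class distance on L2-normalized features."""
    # Map features onto the unit hypersphere.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    # Pairwise Euclidean distance matrix.
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)
    intra = dist[same & ~eye]   # within-class pairs (self-pairs excluded)
    inter = dist[~same]         # between-class pairs
    # Minimizing this ratio shrinks intra-class spread while
    # enlarging inter-class margins simultaneously.
    return intra.max() / (inter.min() + eps)
```

Because both the numerator and denominator are optimized jointly, driving the loss down enforces within-class compactness and between-class separability at the same time, mirroring the LDA-style motivation stated in the highlights.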

Highlights

  • Person reidentification aims to retrieve the person-of-interest among nonoverlapping camera views according to the given person image

  • Motivated by Linear Discriminant Analysis (LDA), which seeks a subspace in which samples have the largest interclass distance and the smallest intraclass distance by optimizing the ratio of these two distances, we take the ratio of the maximal intraclass distance to the minimal interclass distance as a constraint objective in the re-id task

  • We remove the last fully connected (FC) layer from the training network to obtain the feature extractor for the person reidentification task. The testing images are resized to 288 × 144 before they are fed to the feature extractor
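The test-time procedure in the last highlight can be sketched as follows. The backbone below is a hypothetical stand-in (the paper's actual architecture is not specified in this excerpt), and the 751-way classifier assumes the Market-1501 training identity count; what the sketch shows is the pattern of dropping the final FC layer to turn the training network into a feature extractor for 288 × 144 inputs.

```python
import torch
import torch.nn as nn

# Hypothetical tiny backbone standing in for the real re-id network.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
# Final FC layer used only during training (751 = Market-1501 train IDs).
classifier = nn.Linear(8, 751)

model = nn.Sequential(backbone, classifier)  # training network
extractor = model[0]                         # test time: last FC layer removed

img = torch.rand(1, 3, 288, 144)             # image resized to 288 x 144
feat = extractor(img)                        # pedestrian feature for retrieval
```

At test time, retrieval is then performed by comparing the similarity of these extracted features, as described in the abstract.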


Summary

Introduction

Person reidentification aims to retrieve the person-of-interest among nonoverlapping camera views according to a given person image. Due to the limitations of working environments and camera devices, the captured images usually exhibit vast differences in illumination, occlusion, person posture, camera view, etc. Traditional re-id approaches tackle these problems mainly with handcrafted feature representations [1, 2] and metric learning [3, 4]. With the rapid development of neural networks and the popularization of large-scale re-id datasets in recent years, deep learning based methods have been widely applied to person reidentification and have obtained remarkable performance. These approaches can integrate feature learning and metric learning in an end-to-end framework, and they now dominate research on person reidentification.

