Abstract
Powerful local features can be extracted from multiple body regions of a pedestrian. Early person re-identification research focused on extracting local features by locating regions with specific pre-defined semantics, which is both less effective and adds complexity to the network. In this paper, we propose a multiple granularity person re-identification network, based on representation learning and metric learning, for learning discriminative representations of pedestrian images. The network consists of a multiple granularity feature extraction part and a combined loss part. The multiple granularity feature extraction part extracts global features and local features of different granularities from the feature maps of Conv4 and Conv5 of the ResNet50 backbone, respectively, so the extracted feature information is more comprehensive and discriminative. The combined loss part performs supervised learning with a joint representation learning and metric learning objective, which enables the model to learn better parameters. Experimental results show that the Rank-1 accuracy of the multiple granularity person re-identification network reaches 95.2% on the Market1501 dataset and 88.2% on the DukeMTMC-reID dataset, which illustrates the effectiveness of the model.
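In re-identification work, a "joint representation learning and metric learning" objective typically combines a softmax identity (cross-entropy) loss with a triplet loss. The abstract does not spell out the exact terms, so the sketch below is an illustrative assumption, not the paper's definition; the margin value and function names are hypothetical.

```python
import math

def cross_entropy(logits, label):
    # Representation-learning (ID) term: softmax cross-entropy
    # for one sample's classification logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[label] / sum(exps))

def triplet_loss(anchor, positive, negative, margin=1.2):
    # Metric-learning term: pull same-identity features together,
    # push different-identity features apart by at least `margin`
    # (margin value here is an assumption, not from the paper).
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

def combined_loss(logits, label, anchor, positive, negative, margin=1.2):
    # Joint supervision: sum of the ID loss and the triplet loss.
    return cross_entropy(logits, label) + triplet_loss(
        anchor, positive, negative, margin)
```

In practice each branch of such a network would contribute its own ID and triplet terms, and the per-branch losses are summed into the final training objective.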