Abstract
Person re-identification aims to match pedestrian images across multiple surveillance camera views. It remains a challenging task due to partial occlusion of pedestrian images, variations in illumination across surveillance cameras, and similar appearances among different pedestrians. To improve the representation ability of pedestrian features extracted by convolutional neural networks, in this paper we propose an improved split-attention architecture for person re-identification. Specifically, we first divide the feature map into two sub-groups and then split the features in each sub-group into three more fine-grained sub-feature maps. Moreover, to minimize inter-class similarity and maximize intra-class similarity, we optimize our network with circle loss and identification loss jointly. Circle loss allows the similarity scores to learn at different paces, which benefits deep feature learning; it not only gives the model higher optimization flexibility but also makes its convergence target more definite. Unlike many methods that use complex convolutional neural networks to represent pedestrian feature maps in a layer-wise manner, our method improves the representation ability of pedestrian features at a more fine-grained level. We evaluate the proposed network on two large-scale person re-identification benchmarks, Market-1501 and DukeMTMC-reID. Experimental results show that the proposed split-attention network outperforms state-of-the-art methods on both datasets while using only pedestrian global features.
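To make the two-way grouping and three-way split concrete, the sketch below shows one plausible PyTorch realization of such a split-attention block, with cardinality 2 (sub-groups) and radix 3 (fine-grained splits per sub-group). The class name SplitAttention, the layer widths, and the reduction ratio are illustrative assumptions; the abstract does not specify the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    """Hypothetical sketch of a split-attention block as described in the
    abstract: the feature map is divided into `cardinality` sub-groups, each
    sub-group is split into `radix` fine-grained sub-feature maps, and the
    splits are re-weighted by learned attention. All sizes are assumptions."""

    def __init__(self, channels, cardinality=2, radix=3, reduction=4):
        super().__init__()
        self.radix = radix
        # One grouped conv produces radix * channels features at once;
        # `groups` realizes both the sub-group and the radix partitioning.
        self.conv = nn.Conv2d(channels, channels * radix, kernel_size=3,
                              padding=1, groups=cardinality * radix, bias=False)
        self.bn = nn.BatchNorm2d(channels * radix)
        inter = max(channels // reduction, 32)
        self.fc1 = nn.Conv2d(channels, inter, 1, groups=cardinality)
        self.bn_fc = nn.BatchNorm2d(inter)
        self.fc2 = nn.Conv2d(inter, channels * radix, 1, groups=cardinality)

    def forward(self, x):
        b, c = x.shape[:2]
        feats = F.relu(self.bn(self.conv(x)))                   # (b, c*radix, h, w)
        splits = feats.view(b, self.radix, c, *feats.shape[2:])  # radix splits
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)  # fuse + global pool
        attn = self.fc2(F.relu(self.bn_fc(self.fc1(gap))))      # (b, c*radix, 1, 1)
        attn = F.softmax(attn.view(b, self.radix, c, 1, 1), dim=1)  # across splits
        return (attn * splits).sum(dim=1)                       # (b, c, h, w)

# Example (hypothetical sizes): channels must be divisible by cardinality*radix.
# block = SplitAttention(channels=96)
# out = block(torch.randn(8, 96, 24, 12))  # -> torch.Size([8, 96, 24, 12])
```

The softmax over the radix dimension is what turns the three fine-grained sub-feature maps of each sub-group into a weighted combination rather than a plain sum.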
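The joint objective pairs a softmax identification loss with circle loss. For reference, the standard circle-loss formulation (Sun et al., CVPR 2020), which realizes the "different paces" behaviour described above, is shown below; the scale and margin values used by the authors are not given in the abstract.

```latex
\mathcal{L}_{\text{circle}} =
\log\!\Bigl[\, 1
  + \sum_{j=1}^{L} \exp\!\bigl(\gamma\,\alpha_n^{j}\,(s_n^{j} - \Delta_n)\bigr)
    \sum_{i=1}^{K} \exp\!\bigl(-\gamma\,\alpha_p^{i}\,(s_p^{i} - \Delta_p)\bigr)
\Bigr]
```

Here $s_p^i$ and $s_n^j$ are the $K$ within-class and $L$ between-class similarity scores, $\gamma$ is a scale factor, and with margin $m$ the adaptive weights $\alpha_p^i = [O_p - s_p^i]_+$ and $\alpha_n^j = [s_n^j - O_n]_+$ use the optima $O_p = 1 + m$, $O_n = -m$ and decision boundaries $\Delta_p = 1 - m$, $\Delta_n = m$. Because a score far from its optimum receives a larger weight, poorly optimized similarities get stronger gradients, which is what gives circle loss its flexible optimization and more definite convergence target.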