Abstract
Person re-identification aims to retrieve the same pedestrian across different cameras. It remains a challenging task for intelligent visual surveillance systems because of similar appearances, varying camera viewpoints, scene illumination, and pedestrian poses. In this paper, we propose a novel two-stream network, named the spatial segmentation network, that learns both global and local features in a unified framework for non-aligned person re-identification. One stream focuses on spatial feature learning using global adaptive average pooling in a deep convolutional neural network. The other stream learns fine-grained local features by adopting horizontal average pooling, without a part-division scheme that depends on a pose predictor. To assess the importance ranking of all features, we also report the performance of every part feature and of the global features. The proposed method achieves 94.51% Rank-1 and 90.78% mAP on Market-1501, 87.52% Rank-1 and 84.82% mAP on DukeMTMC-reID, and 69.71% Rank-1 and 71.67% mAP on CUHK03-detected; these results verify the state-of-the-art performance of the proposed method.
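To make the two-stream idea concrete, below is a minimal sketch of such an architecture: a shared convolutional backbone feeding a global stream (global adaptive average pooling) and a local stream (horizontal average pooling into uniform stripes, with no pose predictor). The ResNet-50 backbone, the number of parts (6), the embedding dimension, and the class count are illustrative assumptions, not the authors' exact settings.

```python
# Hedged sketch of a two-stream (global + horizontal-part) re-ID network.
# Assumptions: ResNet-50 backbone, 6 uniform horizontal stripes, 256-d embeddings.
import torch
import torch.nn as nn
import torchvision


class TwoStreamReID(nn.Module):
    def __init__(self, num_parts: int = 6, feat_dim: int = 256, num_classes: int = 751):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # Shared convolutional backbone (everything before the final pooling/fc).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

        # Global stream: global adaptive average pooling over the whole feature map.
        self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.global_embed = nn.Linear(2048, feat_dim)
        self.global_cls = nn.Linear(feat_dim, num_classes)

        # Local stream: horizontal average pooling into fixed stripes
        # (uniform division, no pose predictor involved).
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.part_embeds = nn.ModuleList(nn.Linear(2048, feat_dim) for _ in range(num_parts))
        self.part_cls = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in range(num_parts))
        self.num_parts = num_parts

    def forward(self, x):
        fmap = self.backbone(x)                      # (B, 2048, H, W)

        # Global branch.
        g = self.global_pool(fmap).flatten(1)        # (B, 2048)
        g = self.global_embed(g)                     # (B, feat_dim)
        g_logits = self.global_cls(g)

        # Local branch: one embedding and classifier per horizontal stripe.
        parts = self.part_pool(fmap).squeeze(-1)     # (B, 2048, num_parts)
        p_feats, p_logits = [], []
        for i in range(self.num_parts):
            p = self.part_embeds[i](parts[:, :, i])  # (B, feat_dim)
            p_feats.append(p)
            p_logits.append(self.part_cls[i](p))

        return g, g_logits, p_feats, p_logits


if __name__ == "__main__":
    model = TwoStreamReID()
    images = torch.randn(4, 3, 384, 128)             # a common re-ID input size
    g, g_logits, p_feats, p_logits = model(images)
    print(g.shape, len(p_feats), p_feats[0].shape)
```

At inference time, the global embedding and the per-part embeddings would typically be concatenated into a single descriptor for retrieval; evaluating each part feature and the global feature separately, as the abstract describes, amounts to ranking with each branch's embedding on its own.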