Abstract

Person re-identification performance has progressed greatly in recent years, and the technique has been applied in many fields. While many person re-identification algorithms build complex network structures to improve robustness and accuracy, such designs often place heavy demands on hardware and software. To overcome this performance-complexity trade-off, in this paper we propose a Semantic Attention Network (SA-Net), which achieves a clear performance gain without requiring a complex network. First, we design a lightweight semantic segmentation subnetwork that effectively removes redundant background information. As a result, during re-identification only pedestrian-specific features are retained, so the network attends to pedestrian regions rather than the background. Second, given the importance of attention mechanisms in person re-identification and to better integrate pedestrian features, we adopt the attention mechanism proposed in [5] and improve the module so that it aggregates per-person channel features faster. The proposed SA-Net is evaluated on two publicly available benchmark datasets, where it outperforms state-of-the-art methods. The top-1 average accuracy of our method on the Market-1501 dataset reaches 96.35%. Compared with the representative Attentive but Diverse Network [3], our method improves top-1 accuracy by 0.75% on Market-1501 and by 0.97% on DukeMTMC-reID.
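The abstract gives no implementation details, but the two ideas it describes can be illustrated with a minimal, hypothetical sketch: a foreground mask from the segmentation subnetwork suppresses background activations, and a channel attention module reweights the remaining pedestrian features. Since the mechanism in [5] is not specified here, a squeeze-and-excitation-style block is assumed; all names and shapes below are illustrative only.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention (assumed; the abstract's reference [5] is unspecified)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pool over H, W
        return x * w.view(b, c, 1, 1)      # excite: per-channel reweighting

class SemanticAttentionBlock(nn.Module):
    """Hypothetical block: mask out background, then apply channel attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = ChannelAttention(channels)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # mask: (B, 1, H, W) foreground probability from the segmentation subnetwork
        return self.attn(feat * mask)

feat = torch.randn(2, 256, 24, 8)   # backbone feature map
mask = torch.rand(2, 1, 24, 8)      # pedestrian foreground mask
out = SemanticAttentionBlock(256)(feat, mask)
print(out.shape)                    # torch.Size([2, 256, 24, 8])
```

Masking before attention means the channel statistics used for reweighting are computed from pedestrian regions only, which matches the abstract's claim that the network focuses on pedestrian parts rather than the background.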
