Abstract
Spatial–temporal information is readily available in practical surveillance scenes, yet most current person re-identification (ReID) methods neglect it. Employing spatial–temporal information as a constraint has been shown to benefit ReID, but existing methods lack an effective model of pedestrian movement patterns. In this paper, we present a ReID framework with internal and external spatial–temporal constraints, termed IESC-ReID. A novel residual spatial attention module builds the internal constraint and increases robustness to partial occlusions and camera viewpoint changes. A Laplace-based spatial–temporal constraint is also introduced to eliminate irrelevant gallery images gathered by the internal learning network. IESC-ReID confines attention to the functioning range of the channel space and applies additional spatial–temporal constraints to further refine the results. Extensive experiments on numerous publicly available datasets show that these constraints consistently improve performance and that the proposed method outperforms several state-of-the-art ReID algorithms. Our code is publicly available at https://github.com/jiaming-wang/IESC.
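To illustrate the idea behind a Laplace-based spatial–temporal constraint, the following is a minimal sketch, not the paper's actual implementation: the travel time between a camera pair is modeled with a Laplace density, and a gallery image's visual similarity is weighted by how plausible its time gap is, so candidates with implausible gaps are suppressed. The parameters `mu` and `b` are hypothetical placeholders for values that would be estimated from training trajectories.

```python
import numpy as np

def laplace_st_score(delta_t, mu, b):
    """Laplace density of the observed time gap between two cameras.

    mu: expected transfer time, b: scale (spread). Both are assumed
    parameters, in practice estimated from training trajectories.
    """
    return np.exp(-np.abs(delta_t - mu) / b) / (2.0 * b)

def joint_score(visual_sim, delta_t, mu, b, eps=1e-6):
    """Fuse visual similarity with the spatial-temporal prior.

    Gallery images whose time gap is implausible under the Laplace
    model receive a near-zero joint score and can be discarded.
    """
    return visual_sim * (laplace_st_score(delta_t, mu, b) + eps)

# Example: two gallery candidates from a camera pair whose expected
# transfer time is mu = 30 s with scale b = 10 s.
sims = np.array([0.90, 0.85])    # visual similarities to the query
gaps = np.array([28.0, 300.0])   # observed time gaps in seconds
scores = joint_score(sims, gaps, mu=30.0, b=10.0)
# the plausible 28 s gap keeps a high score; 300 s is suppressed
```

Ranking by `scores` instead of `sims` alone is what lets the spatial–temporal prior prune gallery images that a purely visual model would rank highly.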
Published in: Journal of Visual Communication and Image Representation