Abstract

Spatial–temporal information is easy to acquire in practical surveillance scenes, but most current person re-identification (ReID) methods neglect it. Employing spatial–temporal information as a constraint has been shown to benefit ReID; however, existing methods lack an effective model of pedestrian movement patterns. In this paper, we present a ReID framework with internal and external spatial–temporal constraints, termed IESC-ReID. A novel residual spatial attention module builds a spatial–temporal constraint and increases robustness to partial occlusion and camera viewpoint changes. A Laplace-based spatial–temporal constraint is also introduced to eliminate irrelevant gallery images gathered by the internal learning network. IESC-ReID confines attention to the functioning range of the channel space and applies additional spatial–temporal constraints to further refine the results. Extensive experiments show that these constraints consistently improve performance, and results on numerous publicly available datasets show that the proposed method outperforms several state-of-the-art ReID algorithms. Our code is publicly available at https://github.com/jiaming-wang/IESC.
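To illustrate the general idea behind a Laplace-based spatial–temporal constraint, the sketch below re-weights gallery candidates by the likelihood of their camera transfer time under a Laplace distribution. This is a minimal illustration, not the paper's implementation: the function names (`laplace_weight`, `rerank`) and the multiplicative fusion of visual similarity with the temporal prior are assumptions, and the location `mu` and scale `b` would in practice be estimated from training transitions between camera pairs.

```python
import math

def laplace_weight(delta_t, mu, b):
    """Laplace likelihood of observing a transfer time delta_t between two
    cameras, with location mu and scale b (both assumed to be estimated
    from training data for the given camera pair)."""
    return math.exp(-abs(delta_t - mu) / b) / (2.0 * b)

def rerank(visual_sims, delta_ts, mu, b):
    """Down-weight gallery candidates whose camera transfer time is
    implausible under the Laplace spatial-temporal prior (illustrative
    multiplicative fusion of appearance and spatial-temporal scores)."""
    return [sim * laplace_weight(dt, mu, b)
            for sim, dt in zip(visual_sims, delta_ts)]

# Two gallery candidates with identical visual similarity: one seen at a
# plausible transfer time (30 s), one at an implausible time (300 s).
scores = rerank([0.9, 0.9], [30.0, 300.0], mu=30.0, b=10.0)
```

Under this fusion, the candidate with the plausible transfer time keeps a much higher score, which is how a spatial–temporal prior can prune irrelevant gallery images before ranking.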
