Abstract

In practical application scenarios, occlusion caused by various obstacles greatly undermines the accuracy of person re-identification. Most existing methods for occluded person re-identification rely on auxiliary models to infer the visible body parts, which leads to inaccurate part-level feature matching, and they ignore the scarcity of occluded training samples; both issues seriously degrade re-identification accuracy. To address these issues, we propose a multi-scale occlusion suppression network (MSOSNet) for occluded person re-identification. Specifically, we first propose a dual occlusion augmentation module (DOAM), which combines random occlusion with our novel cross occlusion to generate more diverse occluded data. Meanwhile, we design a novel occlusion-aware spatial attention module (OSAM) that guides the network to focus on the non-occluded regions of pedestrian images and effectively extract discriminative features. Finally, we propose a part feature matching module (PFMM) that uses a graph matching algorithm to match the non-occluded body parts of pedestrians. Extensive experiments on both occluded and holistic datasets validate the effectiveness of our method.
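The abstract does not specify how DOAM's occlusions are generated. As a rough illustration only, the sketch below shows a generic random rectangular occlusion augmentation of the kind such modules typically build on; the patch-size range, fill value, and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def random_occlusion(img, min_frac=0.1, max_frac=0.3, fill=0, rng=None):
    """Overwrite a random rectangular patch of an (H, W, C) image array.

    A minimal sketch of random-occlusion augmentation; the fractions and
    fill strategy here are illustrative assumptions, not the paper's DOAM.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    # Sample a patch whose side lengths are a random fraction of the image.
    ph = int(h * rng.uniform(min_frac, max_frac))
    pw = int(w * rng.uniform(min_frac, max_frac))
    top = int(rng.integers(0, h - ph + 1))
    left = int(rng.integers(0, w - pw + 1))
    out = img.copy()  # leave the input image untouched
    out[top:top + ph, left:left + pw] = fill
    return out
```

Applying such a transform to training images (alongside a second, complementary occlusion pattern, as DOAM does with cross occlusion) increases the diversity of occluded samples without collecting new data.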
