In this paper, we identify three major challenges in occluded person re-identification (ReID): diverse occlusions, background interference, and dataset bias. To address the first two challenges, our approach incorporates pedestrian segmentation to distinguish pedestrian from non-pedestrian regions. To tackle the third, we introduce an effective image augmentation method called Enhanced Random Occluding (ERO), which leverages other datasets with segmentation annotations to compensate for the lack of detailed annotations in ReID datasets; we compare its effectiveness against existing methods. To exploit the prior knowledge obtained by ERO, we introduce the Priori Segmentation Module (PSM) and the Domain Generalization Module (DGM): the PSM learns out-of-domain prior knowledge without relying on external networks, while the DGM transfers this knowledge to the current domain. Finally, we use the resulting segmentation maps as attention maps for feature aggregation. Together, ERO, PSM, and DGM constitute the Domain Generalization Segmentation Network (DGSN). Experimental results on occluded and holistic person ReID benchmarks demonstrate the superiority of DGSN: on the Occluded-Duke dataset, it achieves a mAP of 69.9% (+2.0%) and a rank-1 accuracy of 60.7% (+0.3%), significantly surpassing state-of-the-art methods.
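The final step described above, using a segmentation map as an attention map for feature aggregation, can be sketched as mask-weighted pooling. This is a minimal illustration with hypothetical names (`aggregate_with_mask`, NumPy arrays standing in for backbone features), not the paper's exact formulation:

```python
import numpy as np

def aggregate_with_mask(feat_map: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Pool a C x H x W feature map into a C-dim vector, weighting each
    spatial location by a pedestrian-probability mask (H x W).

    Background locations (mask ~ 0) contribute nothing, so occluders and
    background clutter are suppressed in the aggregated descriptor.
    """
    # Normalize the mask so the spatial weights sum to 1 (epsilon guards
    # against an all-zero mask, e.g. a fully occluded pedestrian).
    weights = seg_mask / (seg_mask.sum() + 1e-6)
    # Broadcast weights over channels and sum over the spatial dimensions.
    return (feat_map * weights[None]).reshape(feat_map.shape[0], -1).sum(axis=1)

# Toy example: 3 channels on a 2x2 grid; only the top-left cell is "pedestrian".
feat = np.zeros((3, 2, 2))
feat[:, 0, 0] = 5.0                      # pedestrian region carries the signal
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])            # segmentation says only (0, 0) is pedestrian
vec = aggregate_with_mask(feat, mask)    # -> approximately [5, 5, 5]
```

In practice the mask would come from the PSM's segmentation output rather than a hard binary annotation, so the weighting is soft.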