Occluded person re-identification (ReID) poses the significant challenge of matching occluded pedestrians to their holistic counterparts across diverse camera views and scenarios. Robust representation learning is crucial in this context, given the unique challenges introduced by occlusions. First, occlusions often result in missing or distorted appearance information, making accurate feature extraction difficult. Second, most existing methods learn representations from isolated images, overlooking the relational information available within image batches. To address these challenges, we propose a pose-guided partial-attention network with batch information (PPBI), designed to enhance both spatial and relational learning for occluded ReID. PPBI comprises two core components: (1) a node optimization network (NON) that refines the relationships between a pedestrian's key-point nodes to better resolve occlusion-induced inconsistencies, and (2) a key-point batch attention (KBA) module that explicitly models inter-image interactions across batches to mitigate occlusion effects. Additionally, we introduce a correction of hard mining (CHM) module to handle occlusion-related misclassification and a batch enhancement (BE) module to strengthen key-point attention across image batches. Extensive experiments on occluded and holistic ReID benchmarks validate the effectiveness of PPBI; our framework achieves a 2.7% mAP improvement over HoNeT on the Occluded-Duke dataset, demonstrating its robust performance.
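The core idea behind key-point batch attention can be illustrated with a minimal sketch: given per-keypoint embeddings for a batch of images, attention is computed across the batch axis for each key point, so an occluded part in one image can draw on visible counterparts in other images. The class name, tensor shapes, and single-head design below are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of attention across the batch
# dimension over per-keypoint features, assuming inputs of shape [B, K, D].
import torch
import torch.nn as nn

class KeypointBatchAttention(nn.Module):
    """For each key point, attend over all images in the batch so that
    occluded parts can borrow evidence from visible parts elsewhere.
    Single-head design and residual connection are assumptions."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, K, D] = batch size, number of key points, feature dim.
        # Move key points to the front so the batch acts as the sequence axis.
        x_t = x.transpose(0, 1)                         # [K, B, D]
        q, k, v = self.q(x_t), self.k(x_t), self.v(x_t)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # [K, B, B]
        out = attn.softmax(dim=-1) @ v                  # [K, B, D]
        return x + out.transpose(0, 1)                  # residual, back to [B, K, D]

# Usage: a batch of 32 images, 17 key points, 256-d features (hypothetical sizes).
feats = torch.randn(32, 17, 256)
refined = KeypointBatchAttention(256)(feats)
```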