Abstract

Recent advanced methods address the cross-domain person re-identification (Re-ID) problem primarily through pseudo-label estimation. However, person images with similar backgrounds are often incorrectly clustered owing to the interference of background noise, and the lack of filtering of incorrect pseudo labels degrades the quality of the final generated pseudo labels. Here, a progressive learning approach based on background suppression and identity consistency for cross-domain person Re-ID (BSIC-reID) is proposed. In the background suppression module, background mask attention and reverse attention are combined to extract pedestrian features effectively and suppress background noise, highlighting the foreground person information for Re-ID. In addition, the BSIC-reID model extracts multi-scale person features and generates pseudo labels for the target-domain images from different perspectives. Incorrect pseudo labels are filtered by comparing the latent similarity of the multi-scale person features, so that higher-quality pseudo labels for the target-domain images can be generated. The method is evaluated on the Market-1501, DukeMTMC-reID, and MSMT17 datasets using the Cumulated Matching Characteristics (CMC) and mean Average Precision (mAP) metrics. The experimental results demonstrate that the method achieves state-of-the-art performance.
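The identity-consistency filtering described above can be illustrated with a minimal sketch: cluster the target-domain features independently at two scales, then keep only samples whose cluster memberships agree across scales. The function name `filter_pseudo_labels`, the use of DBSCAN, and the 0.5 agreement threshold are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def filter_pseudo_labels(global_feats, part_feats, eps=0.6):
    """Cross-scale pseudo-label filtering (hypothetical sketch).

    global_feats, part_feats: (N, D) L2-normalised feature arrays
    extracted at two different scales for the same N target images.
    Returns the indices of samples whose pseudo labels are kept,
    together with the global-scale cluster assignments.
    """
    # Cluster each feature scale independently; -1 marks outliers.
    labels_g = DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(global_feats)
    labels_p = DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(part_feats)

    keep = []
    for i in range(len(labels_g)):
        if labels_g[i] == -1 or labels_p[i] == -1:
            continue  # discard samples that are outliers under either view
        # A sample is "identity consistent" if its global-scale cluster
        # and part-scale cluster contain largely the same members.
        members_g = set(np.where(labels_g == labels_g[i])[0])
        members_p = set(np.where(labels_p == labels_p[i])[0])
        iou = len(members_g & members_p) / len(members_g | members_p)
        if iou >= 0.5:  # assumed agreement threshold
            keep.append(i)
    return keep, labels_g
```

Under this scheme, a cluster formed mainly by shared background rather than shared identity tends to break apart at one of the scales, so its members fail the agreement test and their noisy pseudo labels are withheld from training.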
