Abstract
Most person re-identification (Re-ID) approaches rely heavily on large quantities of annotated training data. However, owing to sampling and annotation errors, label noise is unavoidable and usually causes a dramatic drop in the performance of existing Re-ID methods. To address this problem, we propose label reliability perception (LRP) for person Re-ID, which refines noisy labels. Specifically, a feature-fusion block (FFB) is proposed to enhance the discriminability of pedestrian features by expanding the network's attention span with a fused feature, which is generated by overlapping a coarse-grained feature obtained via global average pooling with fine-grained features obtained by evenly dividing the feature map along the height dimension and applying global max pooling. In addition, label dual perception (LDP) is proposed to refine noisy labels, rather than filtering samples, by evaluating the reliability of each training sample's label. Specifically, we design five evaluation modes for each sample to perceive the reliability of the labels of its k-nearest-neighbor images. Finally, we replace the noisy label with the most reliable label and optimize the network. Extensive experiments demonstrate the superiority of the proposed model over competing methods; for instance, on Market1501 under a 20% noise ratio, our method achieves 88.8% rank-1 accuracy and 70.5% mAP (improvements of 4.7% and 4.3% over the state of the art), and on DukeMTMC-ReID it achieves 77.7% rank-1 accuracy and 60.3% mAP.
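The abstract's feature-fusion block combines one global-average-pooled (coarse) descriptor with per-stripe global-max-pooled (fine) descriptors. The following is a minimal NumPy sketch of that pooling scheme only; the function name, stripe count, and the stacking used to "overlap" the two granularities are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def feature_fusion(feat_map, num_stripes=6):
    """Sketch of the abstract's feature-fusion idea (assumed details).

    feat_map: (C, H, W) convolutional feature map.
    Returns a (num_stripes + 1, C) array: one coarse vector from
    global average pooling, plus one fine vector per horizontal
    stripe from global max pooling.
    """
    # Coarse-grained branch: global average pooling over H x W.
    coarse = feat_map.mean(axis=(1, 2))                 # (C,)
    # Fine-grained branch: split evenly along the height dimension,
    # then global max pooling within each stripe.
    stripes = np.array_split(feat_map, num_stripes, axis=1)
    fine = [s.max(axis=(1, 2)) for s in stripes]        # num_stripes x (C,)
    # Combine coarse and fine descriptors into one fused feature.
    return np.stack([coarse] + fine, axis=0)            # (num_stripes + 1, C)

fused = feature_fusion(np.random.rand(256, 24, 8))
print(fused.shape)  # (7, 256)
```

With a 256-channel, 24x8 feature map and six stripes, each stripe covers four rows, and the fused output stacks seven 256-dimensional vectors.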