Abstract

Current person re-identification (ReID) methods rely heavily on well-annotated training data, and their performance degrades significantly in the presence of noisy labels, which are ubiquitous in real-life scenes. The reason is that noisy labels not only corrupt the classifier's predictions but also impede feature refinement, making it difficult to distinguish between different persons' features. To address these issues, we propose an Adaptive Self-correction Classification (ASC) loss and an Adaptive Margin Self-correction Triplet (AMSTri) loss. Specifically, the ASC loss helps the network produce better predictions by balancing annotations against predicted labels, and pays more attention to minority samples via a focusing factor. The AMSTri loss, in turn, introduces an adaptive margin that varies with the sample features to accommodate complex data variations, and uses predicted labels to generate reliable triplets for feature refinement. We then present an end-to-end adaptive self-correction joint training framework that incorporates the ASC and AMSTri losses to train a robust ReID model. Comprehensive experiments demonstrate that the proposed framework outperforms most existing counterparts.
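The abstract does not give the exact formulations, but the two ideas it describes — a classification loss that blends the noisy annotation with the model's own prediction and re-weights samples with a focusing factor, and a triplet loss whose margin adapts to the sample — can be sketched as follows. This is a minimal illustrative sketch, not the paper's definition: the blending weight `alpha`, the focal-style factor `(1 - p_t)**gamma`, and the particular adaptive-margin rule `base_margin * (1 + d_ap)` are all assumptions chosen for clarity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def asc_loss(logits, labels, alpha=0.8, gamma=2.0):
    """Hypothetical self-correction classification loss: the target is a
    blend of the (possibly noisy) one-hot annotation and the network's
    prediction, and hard/minority samples are up-weighted by a
    focal-style focusing factor (1 - p_t)**gamma."""
    p = softmax(logits)
    n, c = p.shape
    onehot = np.eye(c)[labels]
    target = alpha * onehot + (1.0 - alpha) * p   # self-corrected soft label
    p_t = p[np.arange(n), labels]                 # confidence on the annotation
    focus = (1.0 - p_t) ** gamma                  # focusing factor
    ce = -(target * np.log(p + 1e-12)).sum(axis=1)
    return float((focus * ce).mean())

def amstri_loss(anchor, pos, neg, base_margin=0.3):
    """Hypothetical adaptive-margin triplet loss: the margin grows with
    the anchor-positive distance, so harder samples demand a larger
    separation between positive and negative pairs."""
    d_ap = np.linalg.norm(anchor - pos, axis=1)
    d_an = np.linalg.norm(anchor - neg, axis=1)
    margin = base_margin * (1.0 + d_ap)           # margin varies with the sample
    return float(np.maximum(0.0, d_ap - d_an + margin).mean())
```

Under this sketch, a confidently correct prediction yields a near-zero ASC loss (small cross-entropy and a tiny focusing factor), whereas a confident mistake is penalized heavily; the triplet term vanishes once the negative is pushed beyond the sample-dependent margin.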
