Clothes-changing person re-identification (CC-ReID) is of great importance in intelligent surveillance, as it can help police officers locate criminals who attempt to evade pursuit by changing clothes. However, the current best-performing algorithm overly emphasizes discriminative information in appearance, which limits the generalization capability of CC-ReID. In this paper, we propose an Appearance-Pose Joint Coordinate Information Collaboration Model (AP-ICM) that further enhances the generalization capability of CC-ReID. FIRST, pose joint coordinates, a novel form of auxiliary information that encodes body shape, are introduced. Compared with the existing auxiliary information used to encode body shape (i.e., contour sketches, human masks, and heatmaps), pose joint coordinates avoid interference from changes in clothing shape and from regions unrelated to the joints, so the model can learn discriminative information from body shape more efficiently. Experiments demonstrate that pose joint coordinates are more effective than the existing auxiliary information in helping the model learn discriminative information from body shape. THEN, a novel Second-order Structural Information is designed to help the model learn discriminative information from the pose joint coordinates more easily, thereby improving generalization. Experiments demonstrate that the Second-order Structural Information is effective in enhancing generalization capability. NEXT, a novel Semantic Representation endows the model with the ability to understand semantic differences between joints, so that the model learns features more accurately during training and thus more easily acquires strong generalization capability. Experiments demonstrate that the Semantic Representation is effective in enhancing generalization capability.
FINALLY, through an Information Collaboration Module, we successfully integrate the discriminative information extracted from the pose joint coordinates into the best-performing algorithm. Experiments demonstrate that our AP-ICM achieves Rank-1 accuracy of 68.50% and 82.90% on two generic CC-ReID datasets, PRCC and VC-Clothes, outperforming the best-performing algorithm by 2.30% and 2.90%, respectively.