Abstract

Clothes changing is one of the major challenges in person re-identification (ReID), because clothes provide a salient and seemingly reliable cue for matching, especially when image resolution is low. Changing clothes therefore significantly degrades standard ReID models, whose decisions are dominated by clothes information. Existing methods that account for clothes changing still perform unsatisfactorily, because they fail to extract sufficient identity information that excludes clothes information. This study aims to disentangle identity, clothes, and unrelated features with a Generative Adversarial Network (GAN). We propose a GAN model with three encoders, one generator, and three discriminators, together with a training procedure that learns these three kinds of features separately and exclusively. Experimental results indicate that our model generally achieves the best performance among state-of-the-art methods on ReID tasks both with and without clothes changing, confirming that it extracts identity, clothes, and unrelated features more precisely and effectively.
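To illustrate the component wiring the abstract describes (three encoders feeding one generator), here is a minimal numpy sketch. All dimensions, names, and the use of plain linear maps are illustrative assumptions, not the paper's actual architecture or training procedure; the discriminators and adversarial losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (assumed for illustration, not from the paper).
IMG_DIM, ID_DIM, CLO_DIM, UNREL_DIM = 64, 16, 16, 16

def make_linear(in_dim, out_dim):
    """Return a random linear map standing in for a trained network component."""
    W = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
    return lambda x: W @ x

# Three encoders: one each for identity, clothes, and unrelated features.
enc_identity = make_linear(IMG_DIM, ID_DIM)
enc_clothes = make_linear(IMG_DIM, CLO_DIM)
enc_unrelated = make_linear(IMG_DIM, UNREL_DIM)

# One generator reconstructs the image from the concatenated feature codes.
generator = make_linear(ID_DIM + CLO_DIM + UNREL_DIM, IMG_DIM)

def disentangle_and_reconstruct(img):
    """Split an image vector into three feature codes, then reconstruct it."""
    f_id = enc_identity(img)
    f_clo = enc_clothes(img)
    f_un = enc_unrelated(img)
    recon = generator(np.concatenate([f_id, f_clo, f_un]))
    return f_id, f_clo, f_un, recon

img = rng.standard_normal(IMG_DIM)
f_id, f_clo, f_un, recon = disentangle_and_reconstruct(img)
print(f_id.shape, f_clo.shape, f_un.shape, recon.shape)
```

At test time, only the identity code would be used for matching, which is what lets the model ignore clothes changes; in the actual method the three discriminators would be trained adversarially to keep the three codes exclusive.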
