Abstract
Clothes changing is a major challenge in person re-identification (ReID), since clothing provides salient and reliable cues for matching, especially when image resolution is low. Clothing variation significantly degrades standard ReID models because clothing information dominates their decisions. Existing methods that account for clothes changing still perform unsatisfactorily, as they fail to extract sufficient identity information that excludes clothing information. This study aims to disentangle identity, clothes, and unrelated features with a Generative Adversarial Network (GAN). We propose a GAN model with three encoders, one generator, and three discriminators, together with a training procedure that learns these three kinds of features separately and exclusively. Experimental results indicate that our model generally achieves the best performance among state-of-the-art methods on ReID tasks both with and without clothes changing, confirming that identity, clothes, and unrelated features are extracted more precisely and effectively by our model.
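The three-encoder / one-generator / three-discriminator wiring the abstract describes can be sketched at a structural level. This is a minimal illustration only: all dimensions, variable names, and the use of random linear maps in place of trained networks are our own placeholders, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Toy stand-in for a trained network: a fixed random linear map."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.01
    return lambda x: x @ W

IMG_DIM, FEAT_DIM = 256, 64  # hypothetical image and feature sizes

# Three encoders, one per disentangled factor (names are illustrative).
enc_identity  = linear(IMG_DIM, FEAT_DIM)
enc_clothes   = linear(IMG_DIM, FEAT_DIM)
enc_unrelated = linear(IMG_DIM, FEAT_DIM)

# One generator reconstructs the image from the concatenated factors.
generator = linear(3 * FEAT_DIM, IMG_DIM)

# Three discriminators, one per factor, each scoring its feature stream.
disc_identity  = linear(FEAT_DIM, 1)
disc_clothes   = linear(FEAT_DIM, 1)
disc_unrelated = linear(FEAT_DIM, 1)

x = rng.standard_normal((4, IMG_DIM))  # a batch of 4 flattened "images"
f_id = enc_identity(x)
f_cl = enc_clothes(x)
f_un = enc_unrelated(x)
x_rec = generator(np.concatenate([f_id, f_cl, f_un], axis=1))
scores = disc_identity(f_id)

print(f_id.shape, x_rec.shape, scores.shape)
```

At retrieval time, only the identity encoder's output would be used for matching; the clothes and unrelated streams exist so that adversarial training can push clothing information out of the identity feature.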
ACM Transactions on Multimedia Computing, Communications, and Applications