Abstract

Generative adversarial network (GAN)-based person re-identification (re-id) schemes offer a practical way to augment training data. However, existing solutions perform poorly because data generation and re-id training are carried out separately, and diverse data are scarce in real-world scenarios. In this paper, a person re-id model (IDGAN) based on a semantic-map-guided identity-transfer GAN is proposed to improve person re-id performance. With the aid of the semantic map, IDGAN efficiently and accurately generates pedestrian images with varying poses, perspectives, and backgrounds, improving the diversity of the training data. To increase visual realism, IDGAN applies a gradient augmentation method based on local quality attention to refine the generated images locally. A two-stage joint training framework then allows the GAN and the re-id network to learn from each other, making better use of the generated data. Detailed experimental results demonstrate that, compared with existing state-of-the-art methods, IDGAN produces high-quality images and significantly enhances re-id performance, reducing the FID of generated images on the Market-1501 dataset by 1.15 and increasing mAP on the Market-1501 and DukeMTMC-reID datasets by 3.3% and 2.6%, respectively.
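The two-stage joint training framework can be illustrated with a minimal schedule sketch: a first stage that trains the GAN alone, followed by a joint stage in which the GAN and the re-id network are updated together. The function name, the stage split, and the component labels below are illustrative assumptions, not the paper's exact algorithm.

```python
def joint_training_schedule(total_epochs, stage1_epochs):
    """Yield (epoch, components_to_update) pairs for a two-stage schedule.

    Stage 1 (epochs < stage1_epochs): only the GAN is updated, so it learns
    to generate plausible pedestrian images before the re-id network sees them.
    Stage 2: both components are updated each epoch, letting the re-id
    network train on generated data while its feedback refines the GAN.
    This is a hedged sketch of the scheduling idea only; actual gradient
    updates for each component are out of scope here.
    """
    for epoch in range(total_epochs):
        if epoch < stage1_epochs:
            yield epoch, ("gan",)
        else:
            yield epoch, ("gan", "reid")


# Example: 4 epochs total, 2 epochs of GAN-only pre-training.
for epoch, components in joint_training_schedule(4, 2):
    print(epoch, components)
```

In a real implementation, each yielded tuple would drive which networks receive optimizer steps that epoch; the mutual learning comes from the stage-2 epochs where both appear together.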
