Abstract
Person re-identification (ReID) is an important task in the field of intelligent security. A key challenge is handling human pose variations, yet existing benchmarks (e.g., Market-1501, DukeMTMC-reID, CUHK03) do not provide sufficient pose coverage to train a robust ReID system. To address this issue, we propose a pose-transferrable person ReID framework that uses pose-transferred sample augmentation (i.e., with ID supervision) to enhance ReID model training. On one hand, novel training samples with rich pose variations are generated by transferring pose instances from the MARS dataset, and these samples are added to the target dataset to facilitate robust training. On the other hand, in addition to the conventional discriminator of a GAN (which distinguishes REAL from FAKE samples), we propose a novel guider sub-network that encourages the generated samples (i.e., with novel poses) to better satisfy the ReID losses (the cross-entropy ReID loss and the triplet ReID loss). An alternating optimization procedure is further introduced to train the resulting Generator-Guider-Discriminator network. Experimental results on Market-1501, DukeMTMC-reID and CUHK03 show that our method achieves substantial performance improvements and outperforms most state-of-the-art methods without elaborate design of the ReID model.
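To make the Generator-Guider-Discriminator idea concrete, below is a minimal PyTorch-style sketch of one alternating training step. It is an illustrative assumption, not the authors' implementation: the toy network definitions, the 18-channel pose heatmap input, the identity count, and the equal loss weighting are all placeholders introduced here for readability.

```python
# Minimal sketch of alternating Generator-Guider-Discriminator training.
# All architectures, shapes and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy generator: (person image, target pose heatmaps) -> pose-transferred image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 18, 32, 3, padding=1), nn.ReLU(),  # 18 pose channels: assumption
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, img, pose):
        return self.net(torch.cat([img, pose], dim=1))

class Discriminator(nn.Module):
    """Conventional GAN critic: REAL vs. FAKE."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

class Guider(nn.Module):
    """ReID guider: identity classifier plus an embedding used for the triplet loss."""
    def __init__(self, num_ids=751, dim=64):  # num_ids is a placeholder
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.classifier = nn.Linear(dim, num_ids)
    def forward(self, x):
        feat = self.backbone(x)
        return self.classifier(feat), feat

G, D, R = Generator(), Discriminator(), Guider()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_r = torch.optim.Adam(R.parameters(), lr=2e-4)
triplet = nn.TripletMarginLoss(margin=0.3)

def train_step(real_img, pose_map, pid, pos_img, neg_img):
    """One alternating update: D on real/fake, guider on real IDs,
    then G against both the adversarial and the ReID (guider) losses."""
    fake = G(real_img, pose_map)

    # Discriminator: distinguish REAL from FAKE samples.
    d_real, d_fake = D(real_img), D(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Guider: ordinary ReID training on real, ID-labelled samples.
    logits, _ = R(real_img)
    r_loss = F.cross_entropy(logits, pid)
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()

    # Generator: fool D and push the pose-transferred sample to satisfy the ReID losses.
    logits_f, feat_f = R(fake)
    _, feat_p = R(pos_img)   # same identity, different image
    _, feat_n = R(neg_img)   # different identity
    d_gen = D(fake)
    g_loss = (F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
              + F.cross_entropy(logits_f, pid)     # cross-entropy ReID loss
              + triplet(feat_f, feat_p, feat_n))   # triplet ReID loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this sketch the guider is updated only on real, labelled images, so it supplies stable identity supervision that steers the generator; the pose-transferred outputs can then be added to the target training set with their source IDs, as described above.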