Abstract

The performance of a person re-identification (Re-ID) model depends heavily on its training dataset and drops significantly when the model is applied to a new scene, due to the large variations between the source training dataset and the target scene. In this paper, we propose a Multi-scale Feature Enhancement (MFE) Re-ID model and a Feature Preserving Generative Adversarial Network (FPGAN) for the cross-domain person Re-ID task. The MFE Re-ID model provides a strong baseline for cross-domain person Re-ID, while FPGAN bridges the domain gap to improve Re-ID performance on the target scene. In the MFE Re-ID model, person semantic feature maps, extracted from the backbone of a segmentation model, enhance the multi-scale feature responses of person body regions; this allows the model to capture multi-scale, robust, discriminative visual factors related to the person. In FPGAN, we translate labeled images from the source to the target domain in an unsupervised manner, learning a transfer function that preserves the perceptual information of the source person images while ensuring that the transferred images show styles similar to those of the target dataset. Extensive experiments demonstrate that combining FPGAN with the MFE Re-ID model achieves state-of-the-art results on the cross-domain Re-ID task on the DukeMTMC-reID and Market-1501 datasets. Moreover, the MFE Re-ID model achieves state-of-the-art results on the supervised Re-ID task. All source code and models will be released for comparative study.
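The abstract describes the MFE idea as using person semantic maps from a segmentation backbone to boost person-body responses in the Re-ID features at several scales. A minimal sketch of one plausible realization is below; the module name, the 1x1 projection, and the residual-attention form are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatureEnhancement(nn.Module):
    """Hypothetical sketch: enhance a Re-ID backbone feature map with a
    person semantic map produced by a segmentation model."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 conv projects the single-channel person map to an
        # attention map matched to the feature channels (assumed design)
        self.proj = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, feat, person_mask):
        # feat: (B, C, H, W) backbone feature map at one scale
        # person_mask: (B, 1, h, w) person probability map
        mask = F.interpolate(person_mask, size=feat.shape[2:],
                             mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.proj(mask))
        # residual enhancement: emphasize person-body regions
        return feat * (1.0 + attn)

# Apply the same mask at three backbone scales (shapes are illustrative).
feats = [torch.randn(2, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
mask = torch.rand(2, 1, 128, 128)
enhanced = [MultiScaleFeatureEnhancement(f.shape[1])(f, mask) for f in feats]
```

Each enhanced map keeps the shape of its input, so the sketch drops into a backbone without altering downstream layers; the abstract's feature-preserving objective in FPGAN would then be a separate loss tying the perceptual features of source and translated images together.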
