Abstract

The person re-identification (re-ID) problem has attracted growing interest in the computer vision community. Most public re-ID datasets are captured by multiple non-overlapping cameras, and the same person may appear dissimilar in different camera views due to variations in illumination, viewpoint and posture. These differences, collectively referred to as camera style variance, keep person re-ID a challenging problem. Recently, researchers have attempted to solve this problem using generative models. Generative adversarial networks (GANs) are widely used for pose transfer or data augmentation to bridge the camera style gap. However, these methods, mostly based on image-level GANs, require substantial computational resources to train the generative models. Furthermore, the GAN is trained separately from the re-ID model, which makes it hard to reach a global optimum for both models simultaneously. In this paper, the authors propose to alleviate camera style variance in the re-ID problem by adopting a feature-level Camera Style Transfer (CST) model, which serves as an intra-class augmentation method and enhances model robustness to camera style variance. Specifically, the proposed CST method transfers the camera style-related information of input features while preserving the corresponding identity information. Moreover, the training process can be embedded into the re-ID model in an end-to-end manner, so the proposed approach can be deployed at much lower time and memory cost. The proposed approach is evaluated on several person re-ID baselines. Extensive experiments demonstrate the effectiveness of the proposed CST model and its benefits for re-ID performance on the Market-1501 dataset.
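The abstract does not specify how the feature-level transfer is realized. As a rough illustration of the general idea of transferring camera style at the feature level while preserving identity content, the sketch below swaps per-channel feature statistics in the style of adaptive instance normalization. This is an assumption-laden example, not the paper's actual CST module; the class name, tensor shapes and the statistics-swapping formulation are all hypothetical.

```python
# Hypothetical sketch of feature-level camera style transfer as intra-class
# augmentation (AdaIN-style statistics swap). This is NOT the paper's exact
# CST module; names, shapes and the formulation are illustrative assumptions.
import torch
import torch.nn as nn


class FeatureStyleTransfer(nn.Module):
    """Swap per-channel feature statistics (a proxy for camera style)
    while keeping the normalized content (a proxy for identity)."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, content_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
        # content_feat, style_feat: (B, C, H, W) mid-level CNN feature maps
        c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
        c_std = content_feat.std(dim=(2, 3), keepdim=True) + self.eps
        s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
        s_std = style_feat.std(dim=(2, 3), keepdim=True) + self.eps
        # Normalize away the source "camera style", re-inject the target style.
        return (content_feat - c_mean) / c_std * s_std + s_mean


if __name__ == "__main__":
    cst = FeatureStyleTransfer()
    feat_cam_a = torch.randn(8, 256, 24, 12)   # features of a person under camera A
    feat_cam_b = torch.randn(8, 256, 24, 12)   # reference features under camera B
    augmented = cst(feat_cam_a, feat_cam_b)    # camera-A content with camera-B style
    print(augmented.shape)                      # torch.Size([8, 256, 24, 12])
```

Because such a module is differentiable and parameter-light, it could in principle be inserted into a re-ID backbone and trained jointly with it end-to-end, which matches the deployment property the abstract highlights.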
