Abstract

Person re-identification (reid) aims at matching pedestrians observed from non-overlapping camera views. It has important applications in surveillance video analysis such as human retrieval, human tracking and activity analysis. Although a large number of effective feature learning and distance metric optimization approaches have been proposed, person reid still suffers from pedestrian appearance variations caused by pose changes. Most previous methods address this problem by learning a pose-invariant descriptor subspace. In this paper, we propose a pose variation adaptation method for person reid that reduces the risk of over-fitting in deep networks. Specifically, we introduce a pose transfer generative adversarial network with a similarity measurement module. With the learned pose transfer model, training images can be transferred to any given pose and combined with the original images to form an augmented training dataset, increasing data diversity against over-fitting. In contrast to previous GAN-based methods, we consider the influence of pose variations on similarity measures to generate sharper and more realistic samples for person reid. In addition, we refine hard example mining to introduce a novel way of exploiting the samples produced by the learned pose transfer model. It focuses on the hard samples caused by pose variations, increasing the number of effective hard examples for learning discriminative features and improving generalization ability. We conduct extensive comparative evaluations on Market-1501 and DukeMTMC-reID to demonstrate the advantages of the proposed method over state-of-the-art person reid approaches.
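The hard example mining mentioned above is commonly realized in the reid literature as batch-hard triplet mining: for each anchor, the farthest same-identity sample and the closest different-identity sample in the batch are selected before applying a hinge loss. The following is a minimal sketch of that general scheme, not the paper's exact formulation; the function name, margin value, and distance-matrix input are illustrative assumptions.

```python
def batch_hard_triplet_loss(dists, labels, margin=0.3):
    """Batch-hard triplet loss sketch (illustrative, not the paper's exact method).

    dists:  n x n pairwise distance matrix (list of lists of floats)
    labels: identity label per sample
    For each anchor, pick the hardest positive (farthest same-ID sample)
    and hardest negative (closest different-ID sample), then hinge.
    """
    n = len(labels)
    losses = []
    for i in range(n):
        pos = [dists[i][j] for j in range(n) if labels[j] == labels[i] and j != i]
        neg = [dists[i][j] for j in range(n) if labels[j] != labels[i]]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        hardest_pos = max(pos)  # farthest same-identity sample
        hardest_neg = min(neg)  # closest different-identity sample
        losses.append(max(0.0, hardest_pos - hardest_neg + margin))
    return sum(losses) / len(losses) if losses else 0.0

# Toy batch of 4 samples, two identities (0 and 1), symmetric distances.
dists = [[0, 4, 1, 6],
         [4, 0, 3, 7],
         [1, 3, 0, 2],
         [6, 7, 2, 0]]
print(batch_hard_triplet_loss(dists, [0, 0, 1, 1], margin=1.0))  # → 2.0
```

Augmenting a batch with pose-transferred images changes which samples end up as the hardest positives and negatives, which is the intuition behind focusing the mining on pose-induced hard examples.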


