Abstract

Existing person re-identification datasets are limited in number, and each exhibits variations in illumination, background occlusion, and pose, which make it difficult for existing methods to learn robust feature representations and lead to degraded recognition performance. To address these problems, this paper proposes a person re-identification method that combines style and pose generation. First, to handle camera style differences among images captured by different cameras, a style transformation method based on generative adversarial networks is introduced into the person re-identification model: a cycle-consistent generative adversarial network (CycleGAN) performs style transfer and reduces the influence of camera differences. Second, because large pose changes make it easy to lose identity-sensitive information, AlphaPose is introduced for pose estimation. Style and pose are combined for the first time, and an improved deep convolutional generative adversarial network (DCGAN) structure enriches the input sample information and generates images with a unified style and pose; training the re-identification network on the new synthetic data improves its recognition performance. Finally, a random erasing method is introduced during data augmentation to reduce overfitting, improve the generalization ability of the network, and alleviate partial occlusion. Experimental results show that the proposed method outperforms typical style-based and pose-based methods, reaching 90.4% rank-1 accuracy and 74.5% mAP on the Market-1501 dataset, improvements of 2.28% and 5.78%, respectively. The performance of person re-identification is thereby improved to a certain extent.
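As an illustration of the random-erasing augmentation step, the following is a minimal sketch of how such an augmentation could be wired into a training transform pipeline using torchvision's built-in RandomErasing; the specific probability, erase scale, input size, and normalization statistics are assumptions chosen for the example, not the settings used in the paper.

    # Minimal sketch of a training-time augmentation pipeline with random erasing.
    # Assumption: all parameter values below are illustrative defaults, not the
    # configuration reported in the paper.
    import torchvision.transforms as T

    train_transform = T.Compose([
        T.Resize((256, 128)),                 # a common person re-ID input size
        T.RandomHorizontalFlip(p=0.5),        # standard flip augmentation
        T.ToTensor(),                         # RandomErasing operates on tensors
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
        T.RandomErasing(p=0.5,                # erase a random rectangle half the time
                        scale=(0.02, 0.2),    # erased area as a fraction of the image
                        value=0),             # fill the erased region with zeros
    ])

Applying this transform to each training image randomly occludes a rectangular region, simulating the partial occlusion mentioned above and discouraging the network from over-relying on any single body region.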
