Abstract

Person re-identification (ReID), which aims to retrieve persons of interest across multiple non-overlapping cameras, has attracted increasing attention. Matching the same person across different camera styles remains a significant challenge. In existing work, the cross-camera-style images generated by the cycle-consistent generative adversarial network (CycleGAN) transfer only camera resolution and ambient lighting, so the generated images contain considerable redundancy as well as unsuitable samples. Although the added data helps prevent over-fitting, it also introduces substantial noise, and accuracy is therefore not significantly improved. In this paper, an improved CycleGAN is proposed to generate images for more effective data augmentation: pedestrian pose transfer is performed at the same time as image style transfer. This not only increases the diversity of pedestrian poses but also reduces the domain gap caused by style differences between cameras. In addition, through the multi-pseudo regularized label (MpRL), the generated images are dynamically assigned virtual labels during training. Extensive experimental evaluation shows very high identification accuracy on the Market-1501, DukeMTMC-reID, and CUHK03-NP datasets: mAP reaches 96.20%, 93.72%, and 86.65%, and rank-1 accuracy reaches 98.27%, 95.37%, and 90.71%, respectively. The experimental results clearly demonstrate the superiority of the proposed method.
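For context, the standard CycleGAN objective on which such a generator builds (following Zhu et al.'s original formulation; the pose-transfer and MpRL components described above are extensions on top of it, whose exact loss terms are detailed in the full text) is

\[
\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F),
\]

where the cycle-consistency loss

\[
\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right]
\]

constrains an image translated to another camera style to be mappable back to its source style, which is what keeps identity-relevant content intact during augmentation.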
