Abstract

Person re-identification (ReID) is a commonly used tool in criminal investigation and surveillance. Although current ReID methods achieve robust results within a single domain, the research focus has shifted to cross-domain ReID in recent years because of the domain bias between different datasets. Generative Adversarial Networks (GANs) are used to transfer image styles between datasets and thereby alleviate the effect of domain bias. However, existing GAN-based models ignore the complete expression and occlusion of pedestrian characteristics, resulting in low feature-extraction accuracy. To address these issues, we introduce a cross-domain model based on feature fusion (FFGAN) that fuses global, local, and semantic features to extract more fine-grained pedestrian representations. Before extracting pedestrian features, we preprocess feature maps with a feature erasure block to handle occlusion. FFGAN thus produces a more complete visual description of pedestrian characteristics, improving its accuracy in identifying pedestrians. Experimental results show that FFGAN improves significantly on several state-of-the-art cross-domain ReID algorithms.
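To illustrate the two ideas the abstract names, the sketch below shows (a) a random feature-erasure step that zeroes a rectangular region of a feature map, and (b) fusion of global, local, and semantic feature vectors by concatenation. This is a minimal NumPy sketch under assumed shapes; the region size, the stripe-based "local" pooling, and the placeholder semantic branch are illustrative choices, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def erase_region(feat, frac=0.25):
    """Zero out a random rectangular region of a C x H x W feature map
    (a stand-in for the paper's feature erasure block; `frac` is assumed)."""
    c, h, w = feat.shape
    eh, ew = max(1, int(h * frac)), max(1, int(w * frac))
    top = int(rng.integers(0, h - eh + 1))
    left = int(rng.integers(0, w - ew + 1))
    out = feat.copy()
    out[:, top:top + eh, left:left + ew] = 0.0
    return out

def fuse(global_f, local_f, semantic_f):
    """Fuse the three branches into one descriptor by concatenation."""
    return np.concatenate([global_f, local_f, semantic_f])

feat = rng.standard_normal((8, 16, 16))      # hypothetical backbone output
erased = erase_region(feat)
descriptor = fuse(
    erased.mean(axis=(1, 2)),                # global: pool the whole map
    erased[:, :8, :].mean(axis=(1, 2)),      # local: pool an upper stripe (illustrative split)
    rng.standard_normal(8),                  # semantic branch output (placeholder)
)
print(descriptor.shape)  # fused descriptor is the concatenation of the three 8-dim vectors
```

Concatenation is only the simplest fusion rule; the point is that the erased map feeds all branches, so no single occluded region dominates the final descriptor.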
