Abstract

In this paper, we propose a novel data augmentation method that dynamically learns occluded samples via adversarial learning for person re-identification (re-ID) in sensor networks. Specifically, we design two CNN models to learn original-image features and occluded-image features, respectively. For the occluded-image features, we extract the most salient region from the attention map to determine a meaningful region to occlude. To match the current state of the CNN, we dynamically occlude pedestrian images at each iteration, thereby generating training images with high diversity. We also employ adversarial learning to improve the generalization ability of the CNN model: a discriminator is introduced to distinguish original-image features from occluded-image features, and the occluded-image features are optimized to confuse the discriminator. As a result, the representations of pedestrian images contain discriminative complementary information. We conduct extensive experiments on Market1501, DukeMTMC-reID and CUHK03, and the experimental results verify that the proposed method outperforms state-of-the-art methods by a large margin.
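To make the pipeline concrete, the following is a minimal sketch of the idea, assuming a PyTorch implementation with toy CNN backbones, a square occlusion patch centred on the attention peak, and a binary cross-entropy adversarial loss; these choices are illustrative assumptions and are not taken from the paper, whose actual architecture and losses may differ.

```python
# Minimal sketch (assumptions: PyTorch, toy backbones, square occlusion patch,
# BCE adversarial loss); not the authors' exact architecture or training schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureCNN(nn.Module):
    """Toy backbone producing a spatial feature map and a pooled embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        fmap = self.conv(x)                                   # B x dim x H' x W'
        feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)      # B x dim
        return fmap, feat

def occlude_most_salient(images, fmap, patch=32):
    """Zero out a square patch centred on the peak of the attention map."""
    B, _, H, W = images.shape
    attn = fmap.abs().mean(dim=1, keepdim=True)               # B x 1 x H' x W'
    attn = F.interpolate(attn, size=(H, W), mode='bilinear', align_corners=False)
    occluded = images.clone()
    for b in range(B):
        idx = attn[b, 0].flatten().argmax().item()
        cy, cx = divmod(idx, W)
        y0, x0 = max(0, cy - patch // 2), max(0, cx - patch // 2)
        occluded[b, :, y0:y0 + patch, x0:x0 + patch] = 0.0
    return occluded

class Discriminator(nn.Module):
    """Predicts whether an embedding came from an original or an occluded image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, feat):
        return self.net(feat)

# One training step on random data (the re-ID identity loss is omitted for brevity).
orig_net, occ_net, disc = FeatureCNN(), FeatureCNN(), Discriminator()
images = torch.randn(4, 3, 128, 64)                           # typical re-ID input size

fmap, orig_feat = orig_net(images)
occluded_images = occlude_most_salient(images, fmap.detach()) # dynamic, per-iteration occlusion
_, occ_feat = occ_net(occluded_images)

bce = nn.BCEWithLogitsLoss()
# Discriminator: separate original features (label 1) from occluded ones (label 0).
d_loss = bce(disc(orig_feat.detach()), torch.ones(4, 1)) + \
         bce(disc(occ_feat.detach()), torch.zeros(4, 1))
# Occluded branch: fool the discriminator so its features mimic the original ones.
g_loss = bce(disc(occ_feat), torch.ones(4, 1))
print(d_loss.item(), g_loss.item())
```

In this sketch the attention map of the original branch drives where the occlusion falls at every iteration, the discriminator is trained to tell the two feature types apart, and the occluded branch is trained to confuse it, mirroring the adversarial objective described above.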
