Abstract

Given an RGB (or infrared, IR) image of a pedestrian, cross-modality person re-identification aims to retrieve images of the same pedestrian from an IR (or RGB) gallery. However, the large modality discrepancy between RGB and IR images significantly degrades retrieval performance. To reduce the impact of this discrepancy and to extract more discriminative pedestrian features, we propose a new cross-modality person re-identification network based on intermediate modality image generation and discriminative information enhancement (IGIE-Net). Specifically, we design an intermediate modality image generation module (IMIGM) that weakens the modality discrepancy: it first extracts the modality-specific and modality-shared information contained in the RGB and IR images separately, and then generates intermediate RGB and intermediate IR images by adaptively fusing each original image with its corresponding modality-specific and modality-shared information. In addition, we design a discriminative information enhancement module (DIEM) that obtains more discriminative pedestrian representations by enhancing the discriminative information contained in deep pedestrian features. Extensive experiments on the publicly available SYSU-MM01 and RegDB datasets show that IGIE-Net achieves performance at the current state-of-the-art level.
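The core idea behind IMIGM, as described above, is to decompose an image's information into modality-shared and modality-specific parts and then adaptively fuse them with the original image to obtain an intermediate-modality image. The paper's actual decomposition and fusion are learned networks; the sketch below is only a toy illustration of that idea, assuming a simple mean/residual split and fixed fusion weights. All function names (`split_features`, `fuse_intermediate`) and weight values are illustrative assumptions, not the authors' method.

```python
# Toy illustration of the intermediate-modality generation idea:
# decompose a signal into a "shared" part and a "specific" part,
# then fuse the original with both parts using adaptive weights.
# In IGIE-Net these components and weights are learned; here the
# decomposition (channel mean vs. residual) and the weights are
# hypothetical stand-ins chosen for clarity.

def split_features(x):
    """Split a 1-D signal into a 'shared' part (its mean, broadcast
    back to full length) and a 'specific' part (the residual)."""
    mean = sum(x) / len(x)
    shared = [mean] * len(x)
    specific = [v - mean for v in x]
    return shared, specific

def fuse_intermediate(original, shared, specific,
                      w_orig=0.5, w_shared=0.3, w_specific=0.2):
    """Adaptively fuse the original signal with its shared and
    specific components; the weights would be learned in practice."""
    return [w_orig * o + w_shared * s + w_specific * p
            for o, s, p in zip(original, shared, specific)]

# A 3-value stand-in for an RGB image's features.
rgb = [0.2, 0.8, 0.5]
shared, specific = split_features(rgb)
intermediate_rgb = fuse_intermediate(rgb, shared, specific)
print(intermediate_rgb)
```

Because the shared part is weighted more heavily than the specific residual, the intermediate signal is pulled toward the modality-agnostic mean while retaining some identity-specific variation, which is the intuition the abstract attributes to intermediate-modality images.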
