Abstract

Existing Visible-Thermal Person Re-identification (VT-REID) methods usually adopt two-stream networks, in which the two streams are trained to extract features from images of the two modalities respectively. In contrast, we design a Modality Adversarial Neural Network (MANN) to solve the VT-REID problem. The proposed MANN consists of a one-stream feature extractor and a modality discriminator. Heterogeneous images from both modalities are processed by the shared feature extractor to generate modality-invariant features, while the modality discriminator aims to distinguish whether an extracted feature comes from the visible or the thermal modality. Moreover, a dual-constrained triplet loss is introduced to further improve cross-modality matching performance. Experiments on two cross-modality person re-identification datasets show that MANN effectively learns modality-invariant features and outperforms state-of-the-art methods by a large margin.
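
To make the adversarial setup concrete, below is a minimal PyTorch sketch of the extractor-plus-discriminator pattern the abstract describes. It assumes a gradient-reversal layer, a common way to implement such modality-adversarial training (the paper may instead use alternating optimization); all module names and layer sizes are hypothetical placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal: identity on the forward pass,
    negated (scaled) gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MANN(nn.Module):
    """One-stream feature extractor shared by visible and thermal
    images, plus a modality discriminator trained adversarially
    against it (a sketch; not the paper's exact architecture)."""

    def __init__(self, feat_dim=2048, lambd=1.0):
        super().__init__()
        # Shared backbone for both modalities (placeholder layers).
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Binary classifier: visible (class 0) vs. thermal (class 1).
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )
        self.lambd = lambd

    def forward(self, x):
        feat = self.extractor(x)  # intended to be modality-invariant
        rev = GradReverse.apply(feat, self.lambd)
        modality_logits = self.discriminator(rev)
        return feat, modality_logits
```

In this sketch, the reversal layer negates the discriminator's gradient before it reaches the extractor, so minimizing the modality-classification loss trains the discriminator to separate the modalities while simultaneously pushing the extractor toward features the discriminator cannot separate, i.e. modality-invariant features.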
