Abstract

Visible-Infrared person re-IDentification (VI-reID) aims to automatically retrieve a pedestrian of interest captured by sensors of different modalities, e.g., a visible camera vs. an infrared sensor. The task requires learning representations that are both modality-invariant and discriminant. Unfortunately, existing VI-reID work mainly focuses on tackling the modality difference, while fine-grained discriminant information has not been well investigated, which leads to inferior identification performance. To address this problem, we propose a Dual-Alignment Part-aware Representation (DAPR) framework to simultaneously alleviate the modality bias and mine discriminant representations at multiple levels. In particular, DAPR hierarchically reduces the modality discrepancy of high-level features by back-propagating reversed gradients from a modality classifier, so as to learn a modality-invariant feature space. Meanwhile, multiple classifier heads with an improved part-aware BNNeck are integrated to supervise the network in producing identity-discriminant representations with respect to both local details and global structures in the learned modality-invariant space. Trained in an end-to-end manner, the proposed DAPR produces camera- and modality-invariant yet discriminant features for matching persons across modalities (the implementation code is available at https://github.com/pzhang0610/DAPR). Extensive experiments are conducted on two benchmarks, i.e., SYSU-MM01 and RegDB, and the results demonstrate the effectiveness of our proposed method.
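To illustrate the modality-alignment idea described above (back-propagating reversed gradients from a modality classifier), the following is a minimal PyTorch sketch of a gradient reversal layer feeding a two-way modality classifier. The class names, feature dimension, and the `lambd` scaling factor are illustrative assumptions, not the paper's released implementation.

```python
import torch
from torch import nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor,
        # so minimizing the modality loss pushes the backbone toward modality-invariant features.
        return -ctx.lambd * grad_output, None


class ModalityClassifier(nn.Module):
    """Hypothetical modality classifier: predicts visible vs. infrared from a feature vector."""

    def __init__(self, feat_dim=2048, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 2),  # two modalities: visible / infrared
        )

    def forward(self, feat):
        reversed_feat = GradReverse.apply(feat, self.lambd)
        return self.classifier(reversed_feat)
```

In such a setup, the modality classification loss would be added to the identity losses from the part-aware and global classifier heads, and the whole network trained end-to-end.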
