Abstract

Occluded person re-identification (Re-ID) is a challenging task that aims to match occluded person images to holistic ones across different camera views. Feature diversity is crucial for high-performance Re-ID. Previous methods rely on additional annotations or hand-crafted rules to achieve feature diversity, which is inefficient or infeasible for occluded Re-ID. To address this, we propose the Diverse Attention Net (DANet), which uses an attention mechanism to mine diverse features. Specifically, DANet incorporates a pair of complementary Diverse Parallel Attention Modules (DPAM) that, under an attention decorrelation constraint (ADC), help the model automatically capture diverse discriminative features at a global scope. Additionally, we propose an Efficient Transformer layer that integrates seamlessly with the proposed DPAM and synergistically enhances the model's ability to handle occlusions. The resulting DANet constructs a set of comprehensive representations that encode diverse discriminative features. Extensive experiments demonstrate that DANet achieves state-of-the-art performance on both occluded and holistic Re-ID benchmarks.
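The abstract does not give the exact form of the attention decorrelation constraint, but a common way to decorrelate two parallel attention branches is to penalize the cosine similarity between their attention maps, pushing the branches to attend to different regions. The sketch below illustrates that idea under this assumption; the function names, shapes, and the squared-cosine form are illustrative, not the paper's definition.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over all entries of an attention-logit map."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def decorrelation_loss(attn_a, attn_b, eps=1e-8):
    """Illustrative ADC-style penalty (an assumption, not the paper's formula):
    squared cosine similarity between two flattened attention maps.
    It is ~1 when the branches attend identically and ~0 when their
    attention maps are orthogonal, so minimizing it encourages diversity."""
    a, b = attn_a.ravel(), attn_b.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return cos ** 2

# Two hypothetical parallel attention branches over an 8x4 spatial grid.
rng = np.random.default_rng(0)
attn_a = softmax(rng.normal(size=(8, 4)))
attn_b = softmax(rng.normal(size=(8, 4)))
print(float(decorrelation_loss(attn_a, attn_b)))  # in [0, 1]; lower = more diverse
```

In training, such a penalty would be added to the Re-ID losses so the two DPAM branches are jointly pushed toward complementary discriminative regions.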
