Abstract

Capturing global and subtle discriminative information with attention mechanisms is essential for addressing the challenge of high inter-class similarity in the vehicle re-identification (Re-ID) task. Current advanced attention mechanisms rest on one of two core ideas: mixing the self-information of nodes, or modeling context from pairwise dependencies between nodes. This paper explores how to exploit both dependency context and self-context efficiently so that attention learns more effectively. We propose a heterogeneous context interaction (HCI) attention mechanism that infers node weights from the interaction of global dependency contexts and local self-contexts, enhancing the effect of attention learning. To reduce computational complexity, global dependency contexts are modeled by aggregating a compressed number of pairwise dependencies, and the interaction of heterogeneous contexts is restricted to a limited range. Building on this mechanism, we propose a heterogeneous context interaction network (HCI-Net), which combines a channel heterogeneous context interaction (CHCI) module and a spatial heterogeneous context interaction (SHCI) module with a rigid partitioning strategy to extract important global and fine-grained features. In addition, we design a non-similarity constraint (NSC) that forces HCI-Net to learn diverse subtle discriminative information. Experimental results on two large datasets, VeRi-776 and VehicleID, show that the proposed HCI-Net achieves state-of-the-art performance; in particular, the mean average precision (mAP) reaches 83.8% on the VeRi-776 dataset.
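The abstract gives only a high-level description of the mechanism; the following PyTorch sketch is one plausible reading of a channel heterogeneous context interaction (CHCI) block, offered purely for illustration. The module name, the use of learned summary nodes to compress the number of pairwise dependencies, the 1-D convolution that keeps the context interaction local, and the sigmoid gating are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelHCI(nn.Module):
    """Hypothetical channel attention in the spirit of CHCI: node (channel)
    weights are inferred from the interaction of a global dependency context,
    built from a compressed set of pairwise affinities, and a local
    self-context derived from each channel's own pooled statistic."""

    def __init__(self, channels: int, keys: int = 16):
        super().__init__()
        # Compress the number of pairwise dependencies: each channel attends
        # to `keys` learned summary nodes instead of all other channels,
        # shrinking the affinity count from C*C to C*keys (assumed design).
        self.key_proj = nn.Linear(channels, keys, bias=False)
        # Local self-context: a small 1-D convolution over the channel axis
        # restricts the interaction of contexts to a neighbourhood.
        self.self_ctx = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
        # Fuse the two heterogeneous contexts into one weight per channel.
        self.fuse = nn.Linear(2, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.flatten(2)                      # (B, C, HW) channel descriptors
        pooled = feats.mean(dim=2, keepdim=True)  # (B, C, 1) self statistic

        # Global dependency context: softmax-weighted aggregation of
        # affinities between channels and the compressed summary nodes.
        summary = self.key_proj(feats.transpose(1, 2)).transpose(1, 2)          # (B, keys, HW)
        affinity = torch.bmm(feats, summary.transpose(1, 2)) / (h * w) ** 0.5   # (B, C, keys)
        dep_ctx = torch.bmm(F.softmax(affinity, dim=-1),
                            summary.mean(dim=2, keepdim=True))                  # (B, C, 1)

        # Local self-context over a limited range of neighbouring channels.
        self_ctx = self.self_ctx(pooled.transpose(1, 2)).transpose(1, 2)        # (B, C, 1)

        # Interaction of heterogeneous contexts -> per-node attention weights.
        weight = torch.sigmoid(self.fuse(torch.cat([dep_ctx, self_ctx], dim=2)))
        return x * weight.unsqueeze(-1)           # reweight channels of x
```

The non-similarity constraint is likewise described only by its purpose (forcing diverse subtle cues). One common way to realize such a constraint, sketched below under the assumption that the network produces one feature vector per rigid part, is to penalize cosine similarity between different part features; the function name and the (B, P, D) layout are hypothetical.

```python
import torch
import torch.nn.functional as F


def non_similarity_constraint(parts: torch.Tensor) -> torch.Tensor:
    """Hypothetical NSC: discourage overlap between part features so each
    branch learns distinct discriminative information. parts: (B, P, D)."""
    z = F.normalize(parts, dim=2)                               # unit-norm part features
    sim = torch.bmm(z, z.transpose(1, 2))                       # (B, P, P) cosine similarity
    sim = sim - torch.diag_embed(sim.diagonal(dim1=1, dim2=2))  # zero the diagonal
    return sim.clamp(min=0).mean()                              # penalise positive overlap
```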
