Abstract

As a core technology in intelligent transportation systems, vehicle re-identification has attracted growing attention. Most existing methods use CNNs to extract global and local features from vehicle images and roughly integrate them to identify vehicles, addressing the inter-class similarity and intra-class difference inherent to the task. However, a significant challenge remains: global and local features carry redundant information, and local features may be misaligned across images, which makes their combination inefficient. To further improve vehicle re-identification, we propose a stripe-assisted global transformer (SaGT) method, which leverages a transformer-based dual-branch network to learn a discriminative holistic representation for each vehicle image. Specifically, one branch exploits a standard transformer layer to extract a global feature, while the other employs a stripe feature module (SFM) to construct stripe-based local features. To facilitate the effective incorporation of local information into the learning of the global feature, we introduce a novel stripe-assisted global loss (SaGL), which combines identity (ID) losses over the global and stripe features to optimize the model. Because the stripe-specific details are thus distilled into the global representation, we use only the global feature for inference, avoiding redundancy. Finally, we introduce a spatial-temporal probability (STPro) that serves as a complementary metric for robust vehicle re-identification. Extensive evaluations on two public datasets validate the effectiveness and superiority of the proposed method.
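To make the design above concrete, the sketch below illustrates the dual-branch idea and the loss combination in PyTorch-style Python. It is a minimal illustration based only on the abstract, not the authors' implementation: the backbone interface, the class-token/patch-token split, the mean-pooled stripes in the SFM, and the uniform averaging of ID losses in sagl_loss are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    class SaGTSketch(nn.Module):
        # Hypothetical dual-branch model (names and shapes are assumptions,
        # not the authors' code). A transformer backbone returns a class
        # token plus patch tokens; the class token serves as the global
        # feature, and a stripe feature module (SFM) pools patch tokens
        # into horizontal stripes for the local branch.
        def __init__(self, backbone, dim=768, num_stripes=4, num_ids=576):
            super().__init__()
            self.backbone = backbone      # e.g. a ViT returning (B, 1 + N, dim)
            self.num_stripes = num_stripes
            self.global_head = nn.Linear(dim, num_ids)  # ID classifier, global branch
            self.stripe_heads = nn.ModuleList(
                [nn.Linear(dim, num_ids) for _ in range(num_stripes)]
            )

        def forward(self, x):
            tokens = self.backbone(x)     # (B, 1 + N, dim): [cls] + N patch tokens
            g = tokens[:, 0]              # global feature from the class token
            patches = tokens[:, 1:]
            # Assumed SFM: row-major patch tokens are split into contiguous
            # groups, so each group corresponds to a horizontal stripe.
            stripes = [s.mean(dim=1) for s in patches.chunk(self.num_stripes, dim=1)]
            return g, stripes

    def sagl_loss(model, g, stripes, labels):
        # Stripe-assisted global loss sketched as a plain average of ID
        # (cross-entropy) losses over the global and stripe features; the
        # paper's exact weighting may differ.
        ce = nn.CrossEntropyLoss()
        loss = ce(model.global_head(g), labels)
        for head, s in zip(model.stripe_heads, stripes):
            loss = loss + ce(head(s), labels)
        return loss / (1 + model.num_stripes)

At inference, only the global feature g would be compared across images (e.g., by cosine similarity); STPro, a spatial-temporal probability, would then be fused with this appearance metric as a complementary score. The abstract does not specify the fusion rule, so it is omitted here.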
