Abstract

As an indispensable part of intelligent transportation systems (ITS), vehicle re-identification (Re-ID) aims to retrieve all images of a target vehicle captured by non-overlapping cameras. The task remains very challenging due to variations in camera perspective and the similar appearance of vehicles sharing the same type and color, i.e., large intra-class variance and small inter-class variance. Although previous methods have made great progress on vehicle Re-ID by leveraging local details and aligning local features, the problem is still far from solved. In this work, we propose to decouple identity-unrelated information from the vehicle representation, tackling both camera perspective variation and vehicle appearance similarity. The key point of this method is to learn a discriminative feature embedding that is independent of identity-unrelated information. Specifically, a novel Identity-Unrelated Information Decoupling (IUID) paradigm is designed to learn invariant features for vehicles with the same ID across different scenes. In our approach, identity-unrelated information is divided into two kinds: camera perspective information and background information. For the former, a feature-level camera generative adversarial module decouples camera perspective information from the feature embedding by extracting features that are invariant across different camera perspectives. For the latter, we propose a vehicle-mask transformer that strengthens the model's attention to local details while reducing the influence of the background. Extensive experiments on two public datasets demonstrate the superiority of IUID over current state-of-the-art methods.
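The background-suppression idea behind the vehicle-mask transformer can be illustrated with masked self-attention: tokens corresponding to background patches receive a large negative bias before the softmax, so attention concentrates on vehicle-foreground patches. The sketch below is a minimal illustration under assumed simplifications (single head, identity projections in place of learned Wq/Wk/Wv, a toy binary mask), not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_self_attention(tokens, vehicle_mask):
    """Single-head self-attention in which background tokens are
    suppressed with an additive mask, so attention weight flows
    only to vehicle-foreground patches (illustrative sketch)."""
    d = tokens.shape[-1]
    # Identity projections stand in for learned query/key/value weights.
    q, k, v = tokens, tokens, tokens
    scores = q @ k.T / np.sqrt(d)
    # Large negative bias on columns belonging to background patches.
    scores = scores + np.where(vehicle_mask[None, :], 0.0, -1e9)
    attn = softmax(scores, axis=-1)
    return attn @ v, attn

# Hypothetical toy example: 4 patch tokens; patches 0 and 2 are vehicle.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
mask = np.array([True, False, True, False])
out, attn = masked_self_attention(tokens, mask)
# Background columns (1 and 3) end up with ~zero attention weight.
print(attn[:, 1].max(), attn[:, 3].max())
```

In this toy form the mask is hard (fully zeroing background attention); a learned soft mask would attenuate rather than eliminate background contributions.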
