Abstract

Vehicle Re-identification (Re-ID) refers to retrieving, from a gallery of vehicle images, images of the same vehicle captured by other cameras; it can also be regarded as a sub-problem of image retrieval. It plays an important role in intelligent transportation and smart cities. The key to vehicle Re-ID is extracting discriminative vehicle features. To extract such features more effectively and improve recognition accuracy, we propose a three-branch adaptive attention network, Global Relational Attention and Multi-granularity Feature Learning (GRMF), which strengthens feature representation and discrimination. First, we divide the network into three branches that extract distinct and useful features from three perspectives: spatial location, channel information, and local information. Second, we propose two effective global relational attention modules that capture global structural information for better attention learning. Specifically, to determine the importance of a node, we directly infer its attention weight from the global relationships between that node and all other nodes. Third, tailored to the characteristics of the vehicle Re-ID task, we introduce a suitable local partition strategy. It not only captures subtle local information in a simple way but also largely alleviates the problems of misalignment and within-part consistency disruption. Extensive experiments demonstrate the effectiveness of our approach, and we achieve state-of-the-art results on two challenging datasets, VeRi776 and VehicleID.
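
The abstract does not give the exact formulation of the global relational attention modules, so the following is only a minimal sketch of the stated idea, assuming a PyTorch implementation with hypothetical names (GlobalRelationalSpatialAttention, reduction): each spatial node's attention weight is inferred directly from its pairwise relations to all other nodes, and the resulting weights re-scale the feature map of one branch.

```python
import torch
import torch.nn as nn


class GlobalRelationalSpatialAttention(nn.Module):
    """Hypothetical sketch: infer each spatial position's attention weight
    from its global relations to every other position in the feature map."""

    def __init__(self, in_channels, reduction=8):
        super().__init__()
        mid = max(in_channels // reduction, 1)
        # 1x1 projections used to compute pairwise affinities between nodes
        self.query = nn.Conv2d(in_channels, mid, kernel_size=1)
        self.key = nn.Conv2d(in_channels, mid, kernel_size=1)

    def forward(self, x):
        # x: (B, C, H, W) feature map from one branch of the backbone
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, N, mid), N = H*W
        k = self.key(x).flatten(2)                    # (B, mid, N)
        relation = torch.bmm(q, k)                    # (B, N, N) node-to-node relations
        # Aggregate each node's relations to all other nodes into one score,
        # then squash it into a per-position attention weight.
        score = relation.mean(dim=2)                  # (B, N)
        attn = torch.sigmoid(score).view(b, 1, h, w)  # (B, 1, H, W)
        return x * attn                               # re-weight the feature map


# Usage example on a dummy branch feature map
features = torch.randn(2, 256, 16, 16)
attended = GlobalRelationalSpatialAttention(256)(features)
print(attended.shape)  # torch.Size([2, 256, 16, 16])
```

A channel-attention counterpart would follow the same pattern with relations computed between channels instead of spatial positions; the aggregation used here (mean of relations followed by a sigmoid) is an assumption, not the paper's exact operator.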
