Abstract

With the development and popularization of unmanned aerial vehicles (UAVs) and surveillance cameras, the vehicle re-identification (ReID) task plays an important role in the field of urban safety. The biggest challenge in vehicle ReID is how to robustly learn a common visual representation of a vehicle across different viewpoints while discriminating between different vehicles with similar visual appearance. To address this problem, this paper designs a normalized virtual softmax loss that enlarges the inter-class distance and reduces the intra-class distance, and proposes a vehicle ReID model trained jointly with the proposed loss and a triplet loss. In addition, we contribute a novel UAV vehicle ReID dataset with multi-viewpoint images to verify the robustness of the methods. Experimental results show that, compared with other softmax-based losses, our method achieves better performance, reaching Rank-1 accuracies of 76.70% and 98.95% on the VRAI and VRAI_AIR datasets, respectively.
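To make the idea concrete, the sketch below shows one plausible way a normalized virtual softmax loss could be combined with a triplet loss, assuming an L2-normalized cosine-softmax formulation in which the feature itself acts as an extra "virtual" negative class. The class names, the `scale` temperature, and the equal weighting of the two losses are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NormalizedVirtualSoftmaxLoss(nn.Module):
    """Sketch (assumed formulation): cosine softmax over L2-normalized
    features and class weights, with one extra 'virtual' negative class
    whose weight vector is the normalized feature itself. The virtual
    class is never the target, so it only tightens the decision margin."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale  # assumed temperature; not specified in the abstract

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        f = F.normalize(features, dim=1)        # (B, D) unit-norm features
        w = F.normalize(self.weight, dim=1)     # (C, D) unit-norm class weights
        logits = self.scale * f @ w.t()         # (B, C) scaled cosine logits

        # Virtual class logit: cosine of a unit feature with itself is 1,
        # so after scaling it is simply `self.scale` for every sample.
        virtual = torch.full((f.size(0), 1), self.scale, device=f.device)
        logits = torch.cat([logits, virtual], dim=1)  # (B, C + 1)

        # Labels stay in [0, C-1]; the virtual class is purely a negative.
        return F.cross_entropy(logits, labels)


# Illustrative joint objective: equal-weighted sum with a standard
# margin-based triplet loss (weighting scheme is an assumption).
def joint_loss(nvs_loss, triplet_loss, features, labels, anchors, positives, negatives):
    return nvs_loss(features, labels) + triplet_loss(anchors, positives, negatives)


if __name__ == "__main__":
    nvs = NormalizedVirtualSoftmaxLoss(feat_dim=512, num_classes=100)
    tri = nn.TripletMarginLoss(margin=0.3)
    feats = torch.randn(8, 512)
    labels = torch.randint(0, 100, (8,))
    a, p, n = torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512)
    print(joint_loss(nvs, tri, feats, labels, a, p, n))
```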
