Abstract

Vehicle re-identification (ReID) has attracted growing research interest in recent years, and excellent performance has been achieved with fixed traffic cameras. However, vehicle ReID in aerial images taken by unmanned aerial vehicles (UAVs), which involve both variable shooting locations and unusual viewpoints, remains under-explored. Recent works tend to extract meaningful local features through careful manual annotation, which is effective but time-consuming. To extract discriminative features while avoiding tedious annotation work, this letter develops an attention mask (AM)-based network that requires only simple color annotation for object enhancement and background reduction. The network makes full use of the deep features obtained by a pretrained color classification network and then applies principal component analysis (PCA) as a mapping function to obtain AMs without part-level annotation. In addition, we introduce a weighted triplet loss (WTL) function to handle the high inter-class similarity caused by the overhead views of UAVs. The loss function concentrates more on negative pairs to strengthen the identification ability of the network. Extensive experiments are conducted on both a UAV dataset and a surveillance dataset, and our method achieves competitive performance compared with recent ReID works.
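
As a rough illustration of the mask-generation idea described above, the sketch below projects the channel dimension of a convolutional feature map onto its first principal component and min-max normalizes the result into a spatial attention mask. The function name, input shape, and normalization are assumptions for illustration and are not taken from the letter.

```python
import numpy as np

def pca_attention_mask(feature_map):
    """Sketch (assumed interface): map a (C, H, W) deep feature map to an
    (H, W) attention mask by projecting each spatial location's C-dimensional
    feature vector onto the first principal component, then min-max
    normalizing to [0, 1]."""
    c, h, w = feature_map.shape
    x = feature_map.reshape(c, h * w).T            # (H*W, C) spatial samples
    x = x - x.mean(axis=0, keepdims=True)          # center the features
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[0]                               # projection onto PC1
    mask = (proj - proj.min()) / (proj.max() - proj.min() + 1e-8)
    return mask.reshape(h, w)                      # spatial attention mask
```

The abstract only states that the WTL concentrates more on negative pairs, so the following is one plausible interpretation (softmax-weighted aggregation of hard negatives combined with batch-hard positives), not the letter's exact formulation; the margin value and weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_triplet_loss(dist, labels, margin=0.3):
    """Sketch of a triplet loss that puts more weight on hard negative pairs.
    dist:   (N, N) pairwise distance matrix of batch embeddings
    labels: (N,)   identity labels of the batch
    Assumes PK sampling, so every anchor has at least one positive in the batch.
    """
    is_pos = labels.unsqueeze(0) == labels.unsqueeze(1)   # same identity
    is_neg = ~is_pos
    # Softmax over negated distances: closer (harder) negatives receive larger
    # weights, so the loss concentrates on the most confusing negative pairs.
    w_neg = torch.softmax(-dist.masked_fill(is_pos, float('inf')), dim=1)
    d_an = (w_neg * dist).sum(dim=1)                      # weighted negative distance
    # Hardest positive per anchor (standard batch-hard mining).
    d_ap = dist.masked_fill(is_neg, float('-inf')).max(dim=1).values
    return F.relu(d_ap - d_an + margin).mean()
```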
