Abstract

Vehicle re-identification (V-ReID) is a critical task that aims to match the same vehicle across images from different camera viewpoints. Previous studies have leveraged attribute clues, such as color, model, and license plate, to enhance V-ReID performance. However, these methods often lack effective interaction between the global–local features and the final V-ReID objective. Moreover, they do not address the challenging issues of real-world scenarios, such as large viewpoint variations, extreme illumination conditions, and car appearance changes (e.g., due to damage or improper driving). We propose a novel framework to tackle these problems and advance research in V-ReID, one that can handle various types of car appearance changes and achieve robust V-ReID under varying lighting conditions. Our main contributions are as follows: (i) we propose a new Re-ID architecture, the global–local self-attention network, which integrates local information into the feature learning process and enhances feature representations for V-ReID; (ii) we introduce VERI-D, a novel damaged-vehicle Re-ID dataset and the first publicly available dataset focused on this challenging yet practical scenario, containing both natural and synthetic images of damaged vehicles captured from multiple camera viewpoints and under different lighting conditions; and (iii) we conduct extensive experiments on the VERI-D dataset, demonstrating the effectiveness of our approach in addressing the challenges of damaged-vehicle re-identification and its superiority over several state-of-the-art V-ReID methods.
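
The abstract names a global–local self-attention network but does not specify its design. As a purely illustrative sketch of the general idea of "integrating local information into the feature learning process" (the class name, the stripe-based part pooling, and the token-fusion scheme below are all assumptions, not the authors' implementation), one way to fuse a global descriptor with local part descriptors via self-attention in PyTorch is:

```python
# Hypothetical sketch: fuses a global feature with K part-level (local)
# features using multi-head self-attention. Nothing here is taken from
# the paper; it only illustrates the global-local fusion concept.
import torch
import torch.nn as nn


class GlobalLocalAttention(nn.Module):
    """Attend over the token sequence [global, part_1, ..., part_K]
    built from a backbone feature map, and return the attended
    global token as the Re-ID embedding."""

    def __init__(self, dim: int = 256, num_parts: int = 4, num_heads: int = 4):
        super().__init__()
        self.num_parts = num_parts
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) feature map from any CNN backbone.
        # Global token: average pooling over the entire map -> (B, 1, C).
        g = feat_map.mean(dim=(2, 3)).unsqueeze(1)
        # Local tokens: average pooling over K horizontal stripes -> (B, K, C).
        stripes = feat_map.chunk(self.num_parts, dim=2)
        locals_ = torch.stack([s.mean(dim=(2, 3)) for s in stripes], dim=1)
        tokens = torch.cat([g, locals_], dim=1)          # (B, 1+K, C)
        out, _ = self.attn(tokens, tokens, tokens)       # self-attention
        tokens = self.norm(tokens + out)                 # residual + norm
        return tokens[:, 0]                              # fused global embedding


# Usage: a (B, 256, 16, 8) backbone map yields a (B, 256) embedding.
if __name__ == "__main__":
    module = GlobalLocalAttention(dim=256, num_parts=4)
    emb = module(torch.randn(2, 256, 16, 8))
    print(emb.shape)  # torch.Size([2, 256])
```

The residual self-attention step lets every part token exchange information with the global token before the embedding is read out, which is one plausible way to create the "effective interaction between global–local features and the final V-ReID objective" that the abstract says prior methods lack.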
