Abstract

In recent years, the difficulty of distinguishing vehicles of the same model has prompted a shift toward leveraging both global appearance and local features, such as lights and rearview mirrors, for vehicle re-identification (ReID). Despite these advances, accurate identification remains challenging, in part because highly discriminative text regions are underused. This paper introduces the Text Region Attention Network (TANet), a novel approach that integrates global and local information with a specific focus on text regions for improved feature learning. TANet captures features that remain stable and distinctive across vehicle views, and its effectiveness is demonstrated through evaluation on the VeRi-776, VehicleID, and VERI-Wild datasets. TANet significantly outperforms existing methods, achieving mAP scores of 83.6% on VeRi-776, 84.4% on VehicleID (Large), and 76.6% on VERI-Wild (Large). Statistical tests further confirm TANet's advantage over the baseline, with notable improvements in mAP and in Top-1 through Top-15 accuracy.
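The abstract does not detail the architecture, but the core idea it describes (attending to text-bearing regions and fusing the attended local feature with a global descriptor) can be sketched. The following is a minimal, hypothetical PyTorch illustration, not the authors' TANet implementation; the module name TextRegionAttention, the channel width, and the fusion-by-concatenation choice are all assumptions made for illustration.

    # Hypothetical sketch of text-region attention for vehicle ReID.
    # Not the authors' TANet; names and dimensions are assumptions.
    import torch
    import torch.nn as nn

    class TextRegionAttention(nn.Module):
        """Fuses a global descriptor with a local descriptor pooled under
        a spatial attention mask intended to emphasize text regions."""
        def __init__(self, channels: int = 512):
            super().__init__()
            # 1x1 conv predicts a per-pixel attention score from the feature map.
            self.attn = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=1),
                nn.Sigmoid(),
            )
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            # feat: (B, C, H, W) backbone feature map.
            mask = self.attn(feat)              # (B, 1, H, W) attention weights
            local = self.pool(feat * mask)      # text-attended local descriptor
            global_ = self.pool(feat)           # plain global descriptor
            # Concatenate global and local descriptors into one embedding.
            return torch.cat([global_, local], dim=1).flatten(1)  # (B, 2C)

    # Usage: embedding = TextRegionAttention(512)(backbone_features)

In a real pipeline, the attention mask would typically be supervised or guided by a text detector so that it localizes genuine text regions rather than arbitrary salient areas; this sketch leaves the mask unsupervised for brevity.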
