Abstract

Text localization across multiple domains is crucial for applications such as autonomous driving and tracking marathon runners. This work introduces DIPCYT, a novel model that uses Domain Independent Partial Convolution and a Yolov5-based Transformer for text localization in images from various domains, including natural scenes, underwater imagery, and drone imagery. Each domain presents unique challenges: underwater images suffer from poor quality and degradation, drone images contain tiny text with degraded shapes, and natural scene images contain arbitrarily oriented and shaped text. In addition, license plates in drone images provide less rich semantic information than other text types because contextual information between characters is lost. To tackle these challenges, DIPCYT employs new partial convolution layers within Yolov5 and integrates Transformer detection heads with a novel Fourier Positional Convolutional Block Attention Module (FPCBAM). This approach leverages text properties common across domains, such as contextual (global) and spatial (local) relationships. Experimental results show that DIPCYT outperforms existing methods, achieving F-scores of 0.90, 0.90, 0.77, 0.85, 0.85, and 0.88 on the Total-Text, ICDAR 2015, ICDAR 2019 MLT, CTW1500, Drone, and Underwater datasets, respectively.
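The abstract does not specify FPCBAM's internals. As a rough illustration of the kind of attention block it names, the following is a minimal PyTorch sketch combining a standard Convolutional Block Attention Module (channel and spatial attention, per Woo et al.) with a sinusoidal Fourier positional encoding. The class names, the encoding scheme, and the fusion step are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical FPCBAM-style block: standard CBAM attention plus a Fourier
# positional encoding. Every design choice here is an assumption; the paper's
# abstract gives no implementation details.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM-style channel attention: pooled descriptors -> shared MLP -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max-pooled descriptor
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-pooled maps -> 7x7 conv -> sigmoid gate."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))


def fourier_position(h: int, w: int, bands: int = 4) -> torch.Tensor:
    """Sin/cos positional grid of shape (4*bands, h, w); one plausible
    'Fourier positional' encoding, not necessarily the paper's."""
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    feats = []
    for k in range(bands):
        freq = (2.0 ** k) * torch.pi
        feats += [torch.sin(freq * gy), torch.cos(freq * gy),
                  torch.sin(freq * gx), torch.cos(freq * gx)]
    return torch.stack(feats)


class FPCBAMSketch(nn.Module):
    """Fuse Fourier position into the feature map, then apply CBAM attention."""
    def __init__(self, channels: int, bands: int = 4):
        super().__init__()
        self.fuse = nn.Conv2d(channels + 4 * bands, channels, kernel_size=1)
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()
        self.bands = bands

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        pos = fourier_position(h, w, self.bands).to(x.device, x.dtype)
        pos = pos.unsqueeze(0).expand(b, -1, -1, -1)
        x = self.fuse(torch.cat([x, pos], dim=1))   # inject position, keep channel count
        return self.spatial_att(self.channel_att(x))
```

Under these assumptions, such a block would drop into a Yolov5-style detection head as, e.g., `FPCBAMSketch(channels=256)` applied to a backbone feature map, letting the positional terms carry global context while the convolutional attention captures local spatial relationships.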
