Text localization across multiple domains is crucial for applications such as autonomous driving and tracking marathon runners. This work introduces DIPCYT, a novel model that combines Domain Independent Partial Convolution with a YOLOv5-based Transformer for text localization in scene images from various domains, including natural scenes, underwater imagery, and drone imagery. Each domain presents unique challenges: underwater images exhibit poor quality and degradation, drone images contain tiny text and loss of text shapes, and natural scene images contain arbitrarily oriented and shaped text. In addition, license plates in drone images provide less semantic information than other text types because contextual cues between characters are lost. To address these challenges, DIPCYT introduces new partial convolution layers within YOLOv5 and integrates Transformer detection heads with a novel Fourier Positional Convolutional Block Attention Module (FPCBAM). This design exploits text properties shared across domains, namely contextual (global) and spatial (local) relationships. Experimental results demonstrate that DIPCYT outperforms existing methods, achieving F-scores of 0.90, 0.90, 0.77, 0.85, 0.85, and 0.88 on the Total-Text, ICDAR 2015, ICDAR 2019 MLT, CTW1500, Drone, and Underwater datasets, respectively.
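The abstract does not specify how FPCBAM is implemented. As a minimal illustrative sketch only, the following PyTorch code shows one plausible reading: a CBAM-style block (channel attention followed by spatial attention) whose spatial branch is conditioned on sinusoidal Fourier positional features, which would supply the global-position cues the abstract attributes to the module. Every class name, hyperparameter, and design choice below is an assumption, not the paper's actual implementation.

```python
# Hypothetical sketch of a CBAM-style attention block with Fourier
# positional features; NOT the paper's actual FPCBAM implementation.
import math
import torch
import torch.nn as nn


class FourierPositionalCBAM(nn.Module):
    """Channel + spatial attention whose spatial branch also sees
    sinusoidal (Fourier) positional maps."""

    def __init__(self, channels: int, reduction: int = 16, num_freqs: int = 4):
        super().__init__()
        self.num_freqs = num_freqs
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention input: avg map + max map + positional maps
        # (2 axes * 2 components (sin, cos) * num_freqs frequencies).
        in_maps = 2 + 4 * num_freqs
        self.spatial_conv = nn.Conv2d(in_maps, 1, kernel_size=7, padding=3)

    def fourier_pos(self, h: int, w: int, device) -> torch.Tensor:
        """Sinusoidal positional maps of shape (4 * num_freqs, H, W)."""
        ys = torch.linspace(0.0, 1.0, h, device=device)
        xs = torch.linspace(0.0, 1.0, w, device=device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        feats = []
        for k in range(self.num_freqs):
            freq = (2.0 ** k) * math.pi
            for axis in (yy, xx):
                feats.append(torch.sin(freq * axis))
                feats.append(torch.cos(freq * axis))
        return torch.stack(feats)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel attention: captures contextual (global) relationships.
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca.view(b, c, 1, 1)
        # Spatial attention: captures spatial (local) relationships,
        # augmented with Fourier positional encodings.
        pooled = torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1
        )
        pos = self.fourier_pos(h, w, x.device).unsqueeze(0).expand(b, -1, -1, -1)
        sa = torch.sigmoid(self.spatial_conv(torch.cat([pooled, pos], dim=1)))
        return x * sa


# Usage: attend over a (hypothetical) YOLOv5 neck feature map.
feat = torch.randn(2, 256, 40, 40)
out = FourierPositionalCBAM(256)(feat)
assert out.shape == feat.shape
```

In this reading, the channel branch models the global context shared by text across domains, while the position-aware spatial branch localizes individual text instances; how the actual FPCBAM realizes these two roles is detailed in the full paper.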