Abstract
Text localization in the wild is challenging due to the presence of 2D and 3D text, shadows, arbitrarily oriented text with non-linear arrangements, varying lighting conditions, and complex backgrounds. This paper proposes the first approach for 3D text localization in natural scene images through shadow removal and a new deep CNN model. In the first step, exploiting the observation that 3D text casts shadows in natural scenes, the proposed model detects and removes the shadow pixels of 3D text based on the Generalized Gradient Vector Flow concept and a new clustering approach. The classification of 2D and 3D text in scene images is strengthened by key features, including pixel strength, sharpness, and edge potential, which are extracted to eliminate false text and shadow pixels. For text localization after shadow removal, EfficientNet is used as the encoder (backbone) and UNet as the decoder in a novel way that employs differentiable binarization. Experimental validation and comparative analysis with state-of-the-art approaches, on both a new purpose-built dataset and the benchmark datasets ICDAR MLT 2019, ICDAR ArT 2019, CTW1500, DAST1500, Total-Text, and MSRA-TD500, for each step of the method show that the proposed approach outperforms existing methods.
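The differentiable-binarization step mentioned in the abstract is commonly expressed as B = 1 / (1 + exp(-k(P - T))), where P is the per-pixel text probability map and T is a learned per-pixel threshold map. The sketch below is a minimal, hypothetical PyTorch illustration of such a head applied to encoder-decoder outputs; the function name, tensor shapes, and the amplification factor k = 50 are assumptions for illustration, not details taken from the paper.

```python
import torch

def differentiable_binarization(prob_map: torch.Tensor,
                                thresh_map: torch.Tensor,
                                k: float = 50.0) -> torch.Tensor:
    """Soft binarization B = sigmoid(k * (P - T)).

    prob_map:   per-pixel text probability map P from the decoder
    thresh_map: per-pixel adaptive threshold map T from the decoder
    k:          amplification factor; k = 50 is a common choice in
                differentiable-binarization text detectors (assumed here)
    """
    return torch.sigmoid(k * (prob_map - thresh_map))

# Hypothetical usage with decoder outputs of shape (batch, 1, H, W):
prob_map = torch.rand(1, 1, 256, 256)    # stand-in for the probability map
thresh_map = torch.rand(1, 1, 256, 256)  # stand-in for the threshold map
binary_map = differentiable_binarization(prob_map, thresh_map)
text_mask = binary_map > 0.5             # final text/non-text decision
```

Because the sigmoid approximation is differentiable, the binarization step can be trained end-to-end with the probability and threshold branches, rather than applying a fixed threshold only at inference time.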