Abstract

Arbitrarily shaped scene text detection has advanced considerably in recent years, and segmentation-based text detection has proven to be an effective approach. However, problems caused by the diverse attributes of text instances, such as shape, scale, and presentation style (dense or sparse), persist. In this paper, we propose a novel text detector, termed DText, which formulates arbitrarily shaped scene text detection based on dynamic convolution. Our method dynamically generates independent, text-instance-aware convolutional parameters for each text instance from multi-level features, thereby overcoming some intractable limitations of arbitrary-shape text detection, such as separating similar adjacent text instances, which is difficult for methods built on fixed, instance-shared convolutional parameters. Unlike standard segmentation methods that rely on region-of-interest bounding boxes, DText enhances the flexibility of the network to retain instance details across diverse resolutions while effectively improving prediction accuracy. Moreover, we propose a text-shape-sensitive position embedding that encodes shape and position information according to the characteristics of each text instance, providing explicit shape and position cues to the generator of the dynamic convolution parameters. Experiments on five benchmarks (Total-Text, SCUT-CTW1500, MSRA-TD500, ICDAR2015, and MLT) show that our method achieves superior detection performance.
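To make the mechanism concrete, the sketch below shows how a dynamic-convolution mask head of this kind can work: a controller predicts a flat parameter vector per text instance, which is carved into private 1x1 convolution filters and applied to a shared feature map, so that adjacent instances are segmented by different filters. This is a minimal PyTorch illustration of the general technique (in the style of CondInst-like dynamic heads), not the authors' implementation; the layer widths, parameter layout, and names (split_dynamic_params, dynamic_mask_head) are assumptions.

import torch
import torch.nn.functional as F

def split_dynamic_params(flat_params, in_channels, channels=(8, 8, 1)):
    """Carve one instance's flat parameter vector into 1x1 conv weights/biases."""
    weights, biases, pos = [], [], 0
    c_in = in_channels
    for c_out in channels:
        n = c_out * c_in
        weights.append(flat_params[pos:pos + n].view(c_out, c_in, 1, 1))
        pos += n
        biases.append(flat_params[pos:pos + c_out])
        pos += c_out
        c_in = c_out
    return weights, biases

def dynamic_mask_head(mask_feats, flat_params):
    """Apply one instance's private dynamic convs to the shared mask features.

    mask_feats: (1, C, H, W) features shared by all instances in the image.
    flat_params: (P,) parameters predicted by a controller for this instance.
    """
    x = mask_feats
    weights, biases = split_dynamic_params(flat_params, mask_feats.size(1))
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = F.conv2d(x, w, bias=b)   # 1x1 conv with instance-specific filters
        if i < len(weights) - 1:
            x = F.relu(x)
    return torch.sigmoid(x)          # (1, 1, H, W) per-instance text mask

# Example: C=8 shared channels, head widths (8, 8, 1)
# -> P = (8*8 + 8) + (8*8 + 8) + (8*1 + 1) = 153 parameters per instance.
mask_feats = torch.randn(1, 8, 64, 64)
flat_params = torch.randn(153)
mask = dynamic_mask_head(mask_feats, flat_params)  # torch.Size([1, 1, 64, 64])

Under this reading, the text-shape-sensitive position embedding would supply additional shape and position channels to the controller (and/or to mask_feats), so that the generated filters are conditioned on where and how each text instance is laid out; the abstract does not specify its exact form.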
