Abstract

Reading text in the wild is a challenging task in computer vision. Existing approaches mainly adopt connectionist temporal classification (CTC) or attention models based on recurrent neural networks (RNNs), which are computationally expensive and hard to train. In this paper, instead of the chain structure of an RNN, we propose an end-to-end fully convolutional network with stacked convolutional layers to effectively capture the long-term dependencies among elements of a scene text image. The stacked convolutional layers are much more efficient than a bidirectional long short-term memory (BLSTM) network in modeling contextual dependency. In addition, we design a discriminative feature encoder by incorporating residual attention blocks into a small densely connected network to enhance the foreground text and suppress background noise. Extensive experiments on seven standard benchmarks, Street View Text, IIIT5K, ICDAR03, ICDAR13, ICDAR15, COCO-Text and Total-Text, validate that our method not only achieves state-of-the-art or highly competitive recognition performance, but also significantly improves efficiency and reduces the number of parameters.
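The core idea above, replacing recurrent context modeling with stacked convolutions, can be illustrated with a toy sketch. The snippet below is not the paper's architecture: it uses a fixed averaging kernel instead of learned weights, and the function names (`conv1d`, `stacked_context`) are hypothetical. It only demonstrates why stacking 1-D convolutions along the time axis of a feature sequence widens the receptive field, so distant characters in a text line can influence each other without any recurrence.

```python
import numpy as np

def conv1d(seq, kernel_size=3):
    """Toy 1-D convolution along the time axis with 'same' padding.
    seq: (T, C) feature sequence extracted from a text-line image.
    Uses a fixed averaging kernel purely to illustrate context mixing;
    the real model would learn its kernels."""
    T, C = seq.shape
    pad = kernel_size // 2
    padded = np.pad(seq, ((pad, pad), (0, 0)))
    out = np.zeros_like(seq)
    for t in range(T):
        out[t] = padded[t:t + kernel_size].mean(axis=0)
    return out

def stacked_context(seq, num_layers=4, kernel_size=3):
    """Stack several convolutions: each extra layer widens the receptive
    field by (kernel_size - 1), so num_layers layers let each time step
    see 1 + num_layers * (kernel_size - 1) neighboring steps."""
    for _ in range(num_layers):
        seq = conv1d(seq, kernel_size)
    return seq

def receptive_field(num_layers, kernel_size):
    """Receptive field (in time steps) of the stacked convolutions."""
    return 1 + num_layers * (kernel_size - 1)
```

For example, an impulse placed at one time step of a length-16 sequence spreads to a 9-step window after four layers of kernel size 3, matching `receptive_field(4, 3) == 9`; a BLSTM achieves a global receptive field in one layer, but at the cost of strictly sequential computation, whereas each convolutional layer here is parallel across all time steps.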
