Abstract

Natural scene text recognition is a challenging task. Attention-based encoder-decoder frameworks have achieved state-of-the-art performance. However, for complex and/or low-quality images, the alignments estimated by a content-based attention network are not accurate enough, so the resulting glimpse vector is not expressive enough to represent the character predicted at the current time step. To address this problem, in this paper we propose a memory-augmented attention model for scene text recognition. The proposed memory-augmented attention network (MAAN) feeds both the partial character sequence already generated and the full history of attended alignments to the attention model when predicting the character at the current time step. The whole network can be trained end-to-end. Experimental results on several challenging benchmark datasets demonstrate that the proposed model achieves comparable or better performance than state-of-the-art methods.
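
The following is a minimal sketch of the idea described above: an attention step that, beyond the decoder state, also conditions on the accumulated alignment history and on a summary of the characters generated so far. The additive (Bahdanau-style) scoring form, module names, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedAttention(nn.Module):
    """Hypothetical sketch: attention conditioned on decoder state,
    alignment history, and a summary of already-emitted characters."""

    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)   # encoder features
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)   # decoder hidden state
        self.w_hist = nn.Linear(1, attn_dim, bias=False)        # per-position alignment history
        self.w_char = nn.Linear(dec_dim, attn_dim, bias=False)  # summary of generated characters
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_feats, dec_state, align_history, char_summary):
        # enc_feats:     (B, T, enc_dim) encoder feature sequence
        # dec_state:     (B, dec_dim)    current decoder hidden state
        # align_history: (B, T)          sum of attention weights from previous steps
        # char_summary:  (B, dec_dim)    e.g. RNN state over emitted characters
        score = self.v(torch.tanh(
            self.w_enc(enc_feats)
            + self.w_dec(dec_state).unsqueeze(1)
            + self.w_hist(align_history.unsqueeze(-1))
            + self.w_char(char_summary).unsqueeze(1)
        )).squeeze(-1)                       # (B, T) unnormalized scores
        alpha = F.softmax(score, dim=-1)     # alignment for this time step
        glimpse = torch.bmm(alpha.unsqueeze(1), enc_feats).squeeze(1)  # (B, enc_dim)
        return glimpse, alpha
```

In a decoding loop, one would update align_history by adding the returned alpha after each step and refresh char_summary from the newly emitted character, so that later attention steps see both memories.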
