Abstract

Text recognition in scene images remains a challenging task for the computer vision and pattern recognition community. For text images affected by multiple adverse factors, such as occlusion (due to obstacles) and poor quality (due to blur and low resolution), the performance of state-of-the-art scene text recognition methods degrades. The key reason is that the existing encoder–decoder framework follows a fixed left-to-right decoding order, which lacks sufficient contextual information. In this paper, we present a novel decoding order in which good-quality characters are decoded first, followed by low-quality characters, thereby preserving contextual information in the aforementioned difficult scenarios. Our method, named NDOrder, extracts visual features with a ViT encoder and then decodes with a Random Order Generation (ROG) module, which learns to decode with random decoding orders, and a Vision-Content-Position (VCP) module, which exploits the connections among visual information, content, and position. In addition, a new dataset named OLQT (Occluded and Low-Quality Text) is created by manually collecting text images that suffer from occlusion or low quality from several standard text recognition datasets. The dataset is available at https://github.com/djzhong1/OLQT. Experiments on OLQT and public scene text recognition benchmarks show that the proposed method achieves state-of-the-art performance.
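
To illustrate the general idea of training a decoder under arbitrary (non-left-to-right) decoding orders, the minimal sketch below builds an attention mask from a given permutation so that each character position can only attend to positions decoded earlier in that order. This is an assumption-laden illustration of permutation-style decoding, not the paper's ROG/VCP implementation; the function name, the tensor layout, and the example order are hypothetical.

```python
import torch

def permutation_attention_mask(perm: torch.Tensor) -> torch.Tensor:
    """Build a (T, T) boolean mask for decoding in the given order.

    perm: 1-D tensor of length T giving the decoding order, e.g.
          torch.tensor([2, 0, 3, 1]) decodes position 2 first.
    mask[i, j] is True iff position i may attend to position j,
    i.e. j is decoded strictly before i in this order.
    """
    T = perm.numel()
    # rank[p] = step at which character position p is decoded
    rank = torch.empty(T, dtype=torch.long)
    rank[perm] = torch.arange(T)
    # i may attend to j only if j's decoding step precedes i's
    return rank.unsqueeze(1) > rank.unsqueeze(0)

# Hypothetical example: decode the clean characters 0 and 3 first,
# then the occluded/blurred characters 1 and 2.
order = torch.tensor([0, 3, 1, 2])
print(permutation_attention_mask(order))
```

Sampling a fresh permutation per training step (the role the ROG module plays in the paper) would expose the decoder to many such masks, so that at inference time confidently recognized characters can provide context for harder ones regardless of their spatial order.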
