Abstract

Recognizing text in images plays an important role in many applications, such as industrial intelligence, robot vision, autonomous driving, command assistance, and scene understanding. Although great progress has been achieved in various fields, modeling complex systems with text recognition technology still requires further attention. To address this, we propose a new end-to-end multi-task learning method that consists of a super-resolution branch (SRB) and a recognition branch. To learn the semantic information of images effectively, we use a feature pyramid network (FPN) to fuse high- and low-level semantic features. The feature map produced by the FPN is then fed to both the super-resolution branch and the recognition branch. The SRB is built on the proposed dual attention mechanism (DAM) and is designed to strengthen the learning of low-resolution text features. The DAM combines a residual channel attention module, which enhances channel dependencies, with a character attention module, which focuses on contextual information. In the recognition branch, the FPN feature map is fed into an RNN sequence module, and an attention-based decoder predicts the final results. To address low-resolution text recognition in Chinese scenes, we construct Chinese super-resolution datasets instead of relying on traditional down-sampling to generate training data. Experiments demonstrate that the proposed method performs robustly on low-resolution text images and achieves competitive results on benchmark datasets.
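The abstract describes the dual attention mechanism (DAM) only at a high level. The snippet below is a minimal, hypothetical PyTorch-style sketch of how a residual channel attention block and a character (context) attention block might be combined; the module names, layer sizes, use of multi-head self-attention, and the order in which the two attentions are applied are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a dual attention mechanism (DAM): residual channel
# attention followed by character/context attention. Sizes and the fusion
# order are assumptions for illustration only.
import torch
import torch.nn as nn


class ResidualChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention with a residual connection."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight channels, then add the input back (residual path).
        w = self.fc(self.pool(x))
        return x + x * w


class CharacterAttention(nn.Module):
    """Self-attention over spatial positions to capture character-level context."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Treat each spatial position of the feature map as a token of dimension C.
        seq = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        ctx, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + ctx)
        return seq.transpose(1, 2).reshape(b, c, h, w)


class DualAttentionBlock(nn.Module):
    """Channel attention followed by context (character) attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_attn = ResidualChannelAttention(channels)
        self.char_attn = CharacterAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.char_attn(self.channel_attn(x))


if __name__ == "__main__":
    # Example: an FPN-like feature map of shape (batch, channels, height, width).
    feats = torch.randn(2, 64, 8, 32)
    out = DualAttentionBlock(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 8, 32])
```

In this sketch the attended feature map keeps the input shape, so it could in principle be passed on to either a super-resolution head or an RNN-based recognition head, mirroring the shared-feature design the abstract outlines.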
