Recently, Chinese scene text recognition has attracted increasing attention. While mainstream scene text recognition methods exhibit outstanding performance on English text, they are considerably limited on Chinese text, owing to inter-class similarity, intra-class variability, and the complex composition of components in scene Chinese text. In this paper, we design an Adaptive Position Encoding (APE) to enhance the model's ability to perceive spatial information. Based on APE, we further design a Local Attention Module (LAM) and a Global Attention Module (GAM). Specifically, LAM captures local features to identify common characteristics among characters of the same category, addressing intra-class variability. Meanwhile, GAM captures global features to identify the subordination relationships among Chinese character components. By integrating LAM and GAM, combining local and global features, the model can distinguish fine-grained differences between fundamentally similar features, thus alleviating inter-class similarity. Further, we adopt a transformer encoder–decoder structure to recognize the vast variety of Chinese characters. Building on the Local/Global Attention Modules and the transformer encoder–decoder framework, we devise a novel sequence-to-sequence Local and Global Attention Network (LGANet), in which both the backbone and the encoder/decoder are composed of attention mechanisms. Experiments on a Chinese scene text dataset show that LGANet achieves a recognition accuracy of 77.3% and a normalized edit distance of 88.6%, both of which are state-of-the-art (SOTA) results (Fig. 1).
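The abstract does not specify the internals of APE, LAM, or GAM. The following is a minimal PyTorch sketch of the general idea it describes: a learned positional encoding feeding a windowed (local) attention branch and a full (global) attention branch whose outputs are fused. The module name, window size, learned-embedding stand-in for APE, and fusion by summation are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LocalGlobalAttentionSketch(nn.Module):
    """Illustrative sketch: fuse windowed (local) and full (global)
    self-attention over a feature sequence. Not the paper's LGANet."""
    def __init__(self, dim=256, heads=8, window=7, max_len=512):
        super().__init__()
        # Learned positional embedding standing in for the paper's
        # Adaptive Position Encoding (APE); details are assumptions.
        self.pos = nn.Parameter(torch.zeros(1, max_len, dim))
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):            # x: (batch, seq_len, dim)
        n = x.size(1)
        x = x + self.pos[:, :n]      # inject positional information
        # Local branch: mask out attention beyond a sliding window, so
        # each token only attends to nearby tokens (intra-class cues).
        idx = torch.arange(n, device=x.device)
        local_mask = (idx[None, :] - idx[:, None]).abs() > self.window
        local, _ = self.local_attn(x, x, x, attn_mask=local_mask)
        # Global branch: unrestricted attention over the whole sequence
        # (component subordination / inter-class cues).
        glob, _ = self.global_attn(x, x, x)
        # Fuse the two branches; summation is an assumption.
        return self.norm(x + local + glob)

# Usage example on a dummy feature sequence.
feats = torch.randn(2, 64, 256)
out = LocalGlobalAttentionSketch()(feats)
print(out.shape)  # torch.Size([2, 64, 256])
```

In a full sequence-to-sequence recognizer such blocks would be stacked in both the backbone and the encoder/decoder, consistent with the abstract's claim that every component of LGANet is attention-based.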