Abstract

Chinese spelling correction (CSC) is a challenging but rewarding task: it not only helps people read and understand text in their daily lives but also serves as a pre-processing step for many natural language processing (NLP) applications. With pre-trained language models such as BERT, many recent works have achieved great success. However, these methods have three limitations. First, they rely only on the output of the correction model's last layer for parameter updates, ignoring the guidance of intermediate features; this can leave the model inadequately trained and lead to overfitting. Second, they use a variety of data augmentations that depend on expert knowledge, and some augmentation rules improve performance only on the target dataset while degrading it on out-of-set data. Third, owing to the nature of statistical models, they tend to rewrite low-frequency expressions as more common high-frequency ones, even though an error-correction system should preserve correct expressions as much as possible. In this work, we propose a novel general framework for CSC that uses metric learning to adaptively learn semantic knowledge from multiple intermediate features. Our approach requires no data augmentation: it simply feeds the input text and the target text of the parallel corpus to the model separately. Moreover, we design a replication mechanism that preserves low-frequency correct expressions instead of over-correcting them. Extensive experiments on benchmark datasets show that our method significantly outperforms state-of-the-art (SOTA) methods on the CSC task. We will release the source code for use by the community.
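The abstract leaves the training objective unspecified, but the general idea of supervising intermediate layers with a metric-learning signal can be sketched roughly as follows. The code below is a minimal, hypothetical PyTorch rendering under our own assumptions, not the authors' implementation: a shared BERT encoder encodes the source and target sentences separately, and a distance term between their intermediate hidden states is added to the usual token-level correction loss. The layer indices, the MSE distance, and the weight metric_weight are illustrative choices.

    # Hypothetical sketch of intermediate-feature metric learning for CSC.
    # Assumptions (not from the paper): shared BERT encoder, MSE distance on
    # selected intermediate layers, cross-entropy token correction head.
    import torch
    import torch.nn.functional as F
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    encoder = BertModel.from_pretrained("bert-base-chinese",
                                        output_hidden_states=True)
    correction_head = torch.nn.Linear(encoder.config.hidden_size,
                                      tokenizer.vocab_size)

    def training_step(src_text, tgt_text, layers=(4, 8, 12), metric_weight=0.1):
        src = tokenizer(src_text, return_tensors="pt", padding=True)
        tgt = tokenizer(tgt_text, return_tensors="pt", padding=True)

        # Encode the noisy source and the correct target separately with the
        # same encoder, as the abstract describes.
        src_out = encoder(**src)
        with torch.no_grad():  # target-side features act as a teacher signal
            tgt_out = encoder(**tgt)

        # Token-level correction loss from the last layer.
        logits = correction_head(src_out.last_hidden_state)
        ce = F.cross_entropy(logits.view(-1, logits.size(-1)),
                             tgt["input_ids"].view(-1),
                             ignore_index=tokenizer.pad_token_id)

        # Metric-learning term: pull intermediate features of the noisy input
        # toward those of the correct text. Token alignment is well defined
        # because CSC errors are character substitutions, so the source and
        # target sequences have equal length.
        metric = sum(F.mse_loss(src_out.hidden_states[l], tgt_out.hidden_states[l])
                     for l in layers) / len(layers)

        return ce + metric_weight * metric

Stopping gradients on the target side is one plausible way to make the correct sentence act as a fixed reference; the paper may instead update both encodings jointly.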
