Abstract

Grammatical Error Correction (GEC) is the task of correcting diverse errors in text, such as spelling, punctuation, morphological, and word-choice mistakes. Framed as a sentence correction task, neural sequence-to-sequence (seq2seq) models have emerged as a solution. However, neural seq2seq GEC models are computationally expensive in both training and inference. They also tend to generalize poorly and remain limited in capability because error-corrected training data is scarce, and are thus unable to correct grammar effectively. In this work, we propose a Neural Cascading Architecture and accompanying techniques for enhancing the effectiveness of neural seq2seq GEC models, inspired by the post-editing processes of Neural Machine Translation (NMT). Our experiments show that, for low-resource NMT models, adopting the presented cascading techniques yields performance comparable to high-resource settings, with improvements over the state of the art (SOTA) on the JHU FLuency-Extended GUG (JFLEG) parallel corpus for developing and evaluating GEC systems. We extensively exploit and evaluate multiple cascading learning strategies and establish best practices for improving neural seq2seq GEC models.
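The central idea of the cascading approach is that the output of one seq2seq corrector is fed as input to the next, analogous to post-editing a machine-translated draft. The sketch below illustrates only this control flow; the stage functions, names, and stopping rule are illustrative placeholders and not the paper's actual trained models.

    # Minimal sketch of a cascaded (multi-pass) GEC pipeline.
    # Real stages would be trained seq2seq GEC models loaded from checkpoints;
    # the string-replacement "stages" here are hypothetical stand-ins.
    from typing import Callable, List

    def cascade_correct(sentence: str,
                        stages: List[Callable[[str], str]],
                        max_rounds: int = 3) -> str:
        """Pass the draft through each correction stage in order,
        repeating until a full pass makes no further changes."""
        draft = sentence
        for _ in range(max_rounds):
            previous = draft
            for correct in stages:
                draft = correct(draft)   # later stage post-edits the earlier draft
            if draft == previous:        # converged: no stage changed anything
                break
        return draft

    spell_stage = lambda s: s.replace("teh", "the")
    grammar_stage = lambda s: s.replace("has went", "has gone")

    print(cascade_correct("She has went to teh store.", [spell_stage, grammar_stage]))
    # -> "She has gone to the store."

In the paper's setting, each stage would be a seq2seq model, and the cascading strategies evaluated differ in how the stages are trained and chained.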
