Abstract

Neural sequence-to-sequence (seq2seq) grammatical error correction (GEC) models are usually computationally expensive in both training and inference. They also tend to generalize poorly because error-corrected training data are limited, which leaves them unable to correct grammar effectively. In this work, we propose neural cascading strategies, inspired by the post-editing processes used in neural machine translation, to enhance the effectiveness of neural seq2seq GEC models. Our experiments show that applying cascading techniques to low-resource NMT models yields performance comparable to that of high-resource NMT models. We extensively explore and evaluate multiple cascading learning strategies and establish best practices for improving neural seq2seq GEC models.
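
The sketch below illustrates the general cascading idea described above: a first seq2seq GEC model produces a draft correction, and a second model post-edits that draft, analogous to post-editing in machine translation. The model names, the two-stage setup, and the generation parameters are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Minimal sketch of a cascaded (post-editing style) GEC pipeline.
# The model identifiers below are placeholders and must be replaced
# with actual seq2seq GEC checkpoints; they are not from the paper.
from transformers import pipeline

# Stage 1: a base seq2seq GEC model produces a first-pass correction.
first_pass = pipeline("text2text-generation", model="first-stage-gec-model")    # placeholder
# Stage 2: a second model post-edits the first model's output.
second_pass = pipeline("text2text-generation", model="second-stage-gec-model")  # placeholder

def cascade_correct(sentence: str) -> str:
    """Run the sentence through both stages, feeding stage-1 output into stage-2."""
    draft = first_pass(sentence, max_length=128)[0]["generated_text"]
    final = second_pass(draft, max_length=128)[0]["generated_text"]
    return final

if __name__ == "__main__":
    print(cascade_correct("She go to school every days ."))
```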
