Abstract

Lexical normalization (LN) aims to transform nonstandard text into its standard form. The problem is of particular importance in natural language processing (NLP) when applying models trained on standard language to user-generated text on social media, where users rely heavily on abbreviations, phonetic substitutions, and colloquial language. Most existing NLP systems, however, are designed with standard language in mind and suffer significant performance drops on social-media text because of its many out-of-vocabulary words. In this paper, we present a new LN technique that uses a transformer-based sequence-to-sequence (Seq2Seq) architecture to build a multilingual characters-to-words machine translation model. Unlike most current methods, the proposed model can recognize and generate previously unseen words. It also greatly reduces the effort required to tokenize and preprocess the nonstandard input and the standard output. The proposed model outperforms the winning entry to the Multilingual Lexical Normalization (MultiLexNorm) shared task at W-NUT 2021 on both intrinsic and extrinsic evaluations.
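To make the characters-to-words framing concrete, the sketch below shows one plausible way to prepare model inputs and targets: the noisy sentence is fed to the encoder as a sequence of individual characters (so unseen words pose no vocabulary problem), while the decoder emits whole normalized words. The boundary marker and function names are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical input/output framing for a characters-to-words Seq2Seq model.
# The "<wb>" word-boundary marker is an assumed convention for this sketch.

def to_char_sequence(sentence: str) -> list[str]:
    """Encoder input: individual characters, with an explicit word-boundary
    marker so the alignment between noisy and normalized words is recoverable."""
    tokens = []
    for word in sentence.split():
        tokens.extend(list(word))   # each character is its own token
        tokens.append("<wb>")       # mark the end of the noisy word
    return tokens

def to_word_sequence(sentence: str) -> list[str]:
    """Decoder target: whole words of the normalized sentence."""
    return sentence.split()

# Example pair: a noisy social-media utterance and its normalization.
src = to_char_sequence("u r gr8")
tgt = to_word_sequence("you are great")
# src: ['u', '<wb>', 'r', '<wb>', 'g', 'r', '8', '<wb>']
# tgt: ['you', 'are', 'great']
```

Because the encoder vocabulary is just characters plus a few markers, an unseen token such as "gr8" needs no special handling; the open-vocabulary burden shifts entirely to the decoder's word generation.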
