Abstract

One of the most persistent characteristics of written user-generated content (UGC) is the use of non-standard words, which makes UGC more difficult to process and analyze automatically. Text normalization is the task of transforming lexical variants into their canonical forms and is often used as a pre-processing step for conventional NLP tasks in order to overcome the performance drop that NLP systems experience when applied to UGC. In this work, we follow a Neural Machine Translation approach to text normalization. Training such an encoder-decoder model requires large parallel corpora of sentence pairs. However, obtaining large data sets of UGC paired with their normalized versions is not trivial, especially for languages other than English. In this paper, we explore how to overcome this data bottleneck for Dutch, a low-resource language. We start off with a publicly available tiny parallel Dutch data set comprising three UGC genres and compare two different approaches. The first is to manually normalize and add training data, a costly and time-consuming task. The second is a set of data augmentation techniques that increase data size by converting existing resources into synthesized non-standard forms. Our results reveal that a combination of both approaches leads to the best results.
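To illustrate the second approach, the sketch below shows how standard text could be converted into synthesized non-standard forms to create extra (non-standard, canonical) training pairs. The specific perturbations (character flooding, vowel dropping, informal substitutions) and the word list are hypothetical examples chosen for illustration, not the exact augmentation techniques used in the paper.

```python
import random
import re

# Hypothetical substitutions mimicking informal Dutch spelling; the paper's
# actual augmentation rules may differ.
INFORMAL_SUBS = {
    "niet": "nie",
    "dat": "da",
    "even": "ff",
}

def flood_characters(word, max_extra=3):
    """Repeat the final letter to imitate character flooding (e.g. 'leuk' -> 'leukkk')."""
    if len(word) < 3 or not word[-1].isalpha():
        return word
    return word + word[-1] * random.randint(1, max_extra)

def drop_vowels(word):
    """Drop internal vowels to imitate abbreviation-style shortening (e.g. 'morgen' -> 'mrgn')."""
    if len(word) < 4:
        return word
    return word[0] + re.sub(r"[aeiou]", "", word[1:-1]) + word[-1]

def synthesize_noise(sentence, p=0.3):
    """Turn a standard sentence into a synthetic non-standard variant."""
    noisy = []
    for word in sentence.split():
        lower = word.lower()
        if lower in INFORMAL_SUBS and random.random() < p:
            noisy.append(INFORMAL_SUBS[lower])
        elif random.random() < p:
            noisy.append(random.choice([flood_characters, drop_vowels])(word))
        else:
            noisy.append(word)
    return " ".join(noisy)

# Each standard sentence paired with its synthetic variant yields an extra
# (non-standard, canonical) sentence pair for the NMT normalization model.
canonical = "dat is echt niet leuk"
print(synthesize_noise(canonical), "->", canonical)
```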

Highlights

  • Social media texts are considered important language resources for several NLP tasks (Van Hee et al., 2017; Pinto et al., 2016; Zhu et al., 2014)

  • Social media texts are considered a type of written user-generated content (UGC) in which several language variations can be found, as people often tend to write as they speak and/or write as fast as possible (Vandekerckhove and Nobels, 2010)

  • Our results reveal that the different setups resolve most of the normalization issues and that automatic data augmentation mainly helps to reduce the number of over-generalizations produced by the Neural MT (NMT) approach


Summary

Introduction

Social media texts are considered important language resources for several NLP tasks (Van Hee et al., 2017; Pinto et al., 2016; Zhu et al., 2014). One of their most persistent characteristics is the use of non-standard words. It is typical to express emotions through symbols or lexical variation, for instance by repeating characters, also known as flooding (wooooow), by capitalization (YEY!), and by the productive use of emoticons. Homophonous graphemic variants of words, abbreviations, spelling mistakes and letter transpositions also occur regularly (Eisenstein et al., 2014).
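A trivial illustration of how one such phenomenon could be handled in a rule-based pre-processing step is sketched below; the regular expression simply collapses character flooding and is an illustrative assumption, not part of the NMT approach described in the paper.

```python
import re

def collapse_flooding(text, max_repeat=2):
    """Collapse runs of the same character (e.g. 'wooooow' -> 'woow')
    so that flooded variants come closer to their canonical form."""
    return re.sub(r"(.)\1{%d,}" % max_repeat, r"\1" * max_repeat, text)

print(collapse_flooding("wooooow YEY!!!!"))  # -> 'woow YEY!!'
```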


