Abstract
Powerful yet simple augmentation techniques have helped modern deep learning-based text classifiers become more robust in recent years. Although these augmentation methods have proven effective, they often rely on random or non-contextualized operations to generate new data. In this work, we modify a specific augmentation method, Easy Data Augmentation (EDA), with more sophisticated text editing operations powered by masked language models such as BERT and RoBERTa, in order to analyze the benefits and drawbacks of creating more linguistically meaningful and, ideally, higher-quality augmentations. Our analysis demonstrates that using a masked language model for word insertion almost always achieves better results than the original method, but it comes at a cost of additional time and resources, which can be partially remedied by deploying a lighter and smaller language model such as DistilBERT.
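To make the idea concrete, the sketch below shows one plausible way to implement masked-language-model word insertion with the Hugging Face `fill-mask` pipeline; the function name, model choice, and selection of the top prediction are illustrative assumptions, not the authors' exact implementation. DistilBERT is used here as the lighter model mentioned above.

```python
import random
from transformers import pipeline

# Illustrative sketch: contextualized word insertion via a masked language model.
# Model choice and decoding strategy are assumptions for demonstration purposes.
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

def mlm_insert_word(sentence: str) -> str:
    """Insert a contextually plausible word at a random position in the sentence."""
    words = sentence.split()
    pos = random.randint(0, len(words))  # insertion point between words
    masked = " ".join(words[:pos] + [fill_mask.tokenizer.mask_token] + words[pos:])
    best = fill_mask(masked)[0]          # take the highest-scoring prediction
    return " ".join(words[:pos] + [best["token_str"]] + words[pos:])

print(mlm_insert_word("the movie was surprisingly good"))
```

Unlike EDA's random insertion, which inserts a synonym of an arbitrary word at an arbitrary position, the mask prediction here is conditioned on the full sentence, so the inserted word tends to fit its context.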