Abstract

Humans are increasingly integrated with devices that enable the collection of vast amounts of unstructured opinionated data. Accurately analysing subjective information from this data is the task of sentiment analysis, an actively researched area in NLP. Deep learning provides a diverse selection of architectures for modelling sentiment analysis tasks and has surpassed other machine learning methods as the foremost approach for performing them. Recent developments in deep learning architectures represent a shift away from Recurrent and Convolutional neural networks towards the increasing adoption of Transformer language models. Utilising pre-trained Transformer language models to transfer knowledge to downstream tasks has been a breakthrough in NLP. This survey applies a task-oriented taxonomy to recent trends in architectures, with a focus on theory, design and implementation. To the best of our knowledge, this is the only survey to cover state-of-the-art Transformer-based language models and their performance on the most widely used benchmark datasets. The survey also discusses open challenges in NLP and sentiment analysis, and covers the five years from 1st July 2017 to 1st July 2022.
