Abstract

Sentiment analysis (SA) in social networks is an important research area. On Twitter, popular information, whether facts or opinions, propagates throughout the network. Unlike normal documents, the restricted length of the messages, along with constantly changing internet slang, has become a challenge for researchers. Bidirectional Encoder Representations from Transformers (BERT) represents the latest generation of pre-trained language models, which have recently advanced a wide range of natural language processing (NLP) tasks. This paper develops a model using BERT for the sentiment analysis task: it fine-tunes BERT with a single fully connected layer added on top for the SA task. This fully connected network reaches state-of-the-art results using simple techniques that prevent over-fitting and allow the model to be fine-tuned easily on this downstream task. The proposed method is implemented, and several sets of features are tested. The results showed that retaining stop-words in the dataset hinders classifier performance, and that extracting topics and keywords from positive and negative tweets separately achieves better results than combining the positive and negative tweets together. Experimental results also showed how model performance changes when k random topics are selected out of a total of t topics.
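As a rough illustration of the architecture the abstract describes (BERT with a single layer on top), the sketch below uses the Hugging Face transformers library; the checkpoint name, dropout rate, and number of classes are assumptions, since the paper's exact hyperparameters are not given here.

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class BertSentimentClassifier(nn.Module):
    """BERT encoder with one fully connected layer on top, in the
    spirit of the abstract. Dropout rate and class count are
    illustrative assumptions, not the paper's reported settings."""

    def __init__(self, num_classes: int = 2, dropout: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)  # a common over-fitting guard
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = outputs.pooler_output  # pooled [CLS] representation
        return self.classifier(self.dropout(pooled))

# Usage: tokenize a tweet and obtain sentiment logits.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertSentimentClassifier()
batch = tokenizer(["great phone, loving it!"], return_tensors="pt",
                  padding=True, truncation=True, max_length=128)
logits = model(batch["input_ids"], batch["attention_mask"])
```

During fine-tuning, both the BERT weights and the added layer would typically be updated end-to-end with a standard cross-entropy loss.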
