Abstract

Bidirectional Encoder Representations from Transformers (BERT) is a state-of-the-art language model used for multiple natural language processing tasks and sequential modeling applications. Accurate context-based sentiment analysis of customer review data from various social media platforms is a challenging and time-consuming task due to the high volume of unstructured data. In recent years, research has been conducted on recurrent neural network algorithms such as Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM), as well as hybrid, neural, and traditional text classification algorithms. This paper presents our experimental research work to overcome known challenges of sentiment analysis models, such as performance, accuracy, and context-based prediction. We propose a fine-tuned BERT model to predict customer sentiment from customer reviews drawn from Twitter, IMDB Movie Reviews, Yelp, and Amazon. In addition, we compared the results of the proposed model with our custom Linear Support Vector Machine (LSVM), fastText, BiLSTM, and hybrid fastText-BiLSTM models, and present a comparative analysis dashboard report. The experimental results show that the proposed model performs better than the other models with respect to various performance measures.

Highlights

  • The Bidirectional Encoder Representations from Transformers (BERT) model is trained and evaluated in the Google Colaboratory cloud environment, while a standard server is used for training and evaluating the fastText, Linear Support Vector Machine (LSVM) and SA-BLSTM models

  • The models are evaluated on accuracy, recall, precision and F1 score, with these performance measures calculated from the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) counts of the confusion matrix

  • The proposed BERT model outperforms the other models in terms of accuracy and overall model performance
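The performance measures named above can be sketched in code. This is an illustrative example, not the authors' implementation; the function name and the sample counts are hypothetical:

```python
# Illustrative sketch (not the authors' code): computing accuracy, precision,
# recall and F1 score from True Positive (TP), True Negative (TN),
# False Positive (FP) and False Negative (FN) confusion-matrix counts.

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return the four performance measures from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for a binary sentiment classifier:
metrics = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(metrics)
```

For these sample counts, accuracy is (80 + 90) / 200 = 0.85; the same four quantities are what the comparative dashboard report summarizes for each model.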

Summary

A Sequence Learning BERT Model for Sentiment Analysis

Accurate context-based sentiment analysis of customer review data from various social media platforms is a challenging and time-consuming task due to the high volume of unstructured data. This paper presents our experimental research work to overcome known challenges of sentiment analysis models, such as performance, accuracy, and context-based prediction. We compared the results of the proposed model with our custom Linear Support Vector Machine (LSVM), fastText, BiLSTM and hybrid fastText-BiLSTM models, and present a comparative analysis dashboard report. The experimental results show that the proposed model performs better than the other models with respect to various performance measures.

INTRODUCTION
RELATED WORK
Data Pre-processing
Proposed BERT Model
Experimental Environments
LSVM Model
Data Source and Dataset
Performance Measures
CONCLUSION AND FUTURE WORK