Abstract. Sentiment analysis plays a crucial role in deciphering public perception and consumer responses in today's digital landscape. This study offers a thorough assessment of diverse sentiment analysis techniques, contrasting conventional machine learning methods with state-of-the-art deep learning architectures. In particular, it examines the efficacy of models derived from Bidirectional Encoder Representations from Transformers (BERT), namely BERT-Base and the Robustly Optimized BERT Pretraining Approach (RoBERTa), alongside Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), Support Vector Machines (SVM), and Naive Bayes classifiers. These approaches are evaluated on precision, recall, F1-score, overall accuracy, and computational efficiency using an extensive sentiment analysis dataset. The results reveal that BERT-based models, particularly RoBERTa, achieve the highest accuracy (87.44%) and F1-score (0.8746), though they also require the longest training time (approximately 3 hours). CNN and LSTM models strike a balance between performance and efficiency, while traditional methods such as SVM and Naive Bayes offer faster training and deployment with moderate accuracy. These insights are valuable for both researchers and practitioners, highlighting the trade-offs between model performance, computational demands, and practical deployment considerations in sentiment analysis applications.
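The evaluation metrics named in the abstract (precision, recall, F1-score, accuracy) can be sketched in a few lines. This is an illustrative example, not the paper's actual pipeline; the label vectors below are hypothetical stand-ins for real sentiment predictions (1 = positive, 0 = negative).

```python
def evaluate(y_true, y_pred):
    """Compute precision, recall, F1, and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Hypothetical gold labels and model predictions for eight reviews.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(evaluate(y_true, y_pred))  # each metric is 0.75 on this toy data
```

In practice these quantities would be computed per model (RoBERTa, CNN, LSTM, SVM, Naive Bayes) on the held-out test split, alongside wall-clock training time for the efficiency comparison.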