Abstract

Twitter, being among the most popular social media platforms, provides peoples' opinions regarding specific ideas, products, services, etc. The large amount of data shared as tweets can help extract users' sentiment and provide valuable feedback for improving the quality of products and services alike. Like other service industries, the airline industry uses such feedback to determine customer satisfaction levels and improve the quality of experience where needed. This, of course, requires accurate sentiments extracted from user tweets. Existing sentiment analysis models suffer from low accuracy on account of contradictions between the tweet text and the assigned label. From this perspective, this study proposes a hybrid sentiment analysis approach in which lexicon-based methods are used with deep learning models to improve sentiment accuracy. Experiments analyze the impact of TextBlob labels on the classification accuracy of models as against the original annotations, considering that the probability of false annotations cannot be overlooked. Furthermore, the efficacy of TextBlob against Afinn and VADER (Valence Aware Dictionary for Sentiment Reasoning) is also evaluated. CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and CNN-LSTM models are deployed in comparison with state-of-the-art machine learning models. Additionally, the efficiency and efficacy of TF-IDF (Term Frequency-Inverse Document Frequency) and BoW (Bag of Words) features are investigated. Results suggest that models perform better when trained on the TextBlob-assigned sentiments than on the original sentiments in the dataset. An LSTM-GRU model outperforms all other models and previous studies, with the highest accuracy of 0.97 and an F1 score of 0.96. Among the machine learning models, the support vector classifier and extra trees classifier achieve the highest accuracy score of 0.92, with TF-IDF and BoW features, respectively.
Despite the good performance of models trained with TextBlob labels, TextBlob-based annotation cannot replace human annotation. Our stance is that human bias, error-proneness, and subjectivity cannot be ignored; we therefore propose that TextBlob-assigned labels be used to assist human annotators, who can vet the TextBlob-annotated dataset.
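To illustrate the lexicon-based annotation step the abstract describes, the following is a minimal, self-contained sketch of polarity-based labeling in the spirit of TextBlob (which maps a text to a polarity score in [-1, 1] and lets a threshold decide the label). The toy word lexicon and the zero-polarity thresholds below are assumptions for illustration only; the study itself uses TextBlob's built-in lexicon and scoring.

```python
# Minimal sketch of lexicon-based sentiment annotation.
# LEXICON is a hypothetical toy word-polarity table; TextBlob ships a
# much larger curated lexicon and more sophisticated scoring.
LEXICON = {
    "great": 0.8,
    "good": 0.5,
    "delayed": -0.4,
    "rude": -0.7,
    "terrible": -0.9,
}

def polarity(text: str) -> float:
    """Average the polarity of known words; 0.0 if no word matches."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def annotate(text: str) -> str:
    """Map a polarity score to a three-way sentiment label.
    Thresholding at exactly 0 is an assumption; a study may use
    different cut-offs for the neutral band."""
    p = polarity(text)
    if p > 0:
        return "positive"
    if p < 0:
        return "negative"
    return "neutral"
```

Labels produced this way (e.g. `annotate("the staff were rude")` yielding `"negative"`) would then replace or be compared against the dataset's original annotations when training the deep learning models.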
