Abstract

Objective
Sentiment analysis is an important method for understanding the emotions and opinions expressed in social media exchanges. Little work has been done to evaluate the performance of existing sentiment analysis tools on social media datasets, particularly those related to health, healthcare, or public health. This study aims to address that gap.

Materials and methods
We evaluated 11 commonly used sentiment analysis tools on five health-related social media datasets curated in previously published studies: Human Papillomavirus Vaccine, Health Care Reform, COVID-19 Masking, Vitals.com Physician Reviews, and the Breast Cancer Forum from MedHelp.org. For comparison, we also analyzed two non-health datasets based on movie reviews and generic tweets. We conducted a qualitative error analysis of the social media posts that were misclassified by all tools.

Results
The existing sentiment analysis tools performed poorly, with an average weighted F1 score below 0.6. Inter-tool agreement was also low, with an average Fleiss' kappa of 0.066. The qualitative error analysis identified two major causes of misclassification: (1) correct sentiment attributed to the wrong subject(s) and (2) failure to properly interpret implicit or indirect sentiment expressions.

Discussion and conclusion
The performance of the existing sentiment analysis tools is insufficient to generate accurate sentiment classifications. The low inter-tool agreement suggests that a study's conclusions could be driven entirely by the idiosyncrasies of the selected tool rather than by the data. This is especially concerning when the results may inform important policy decisions such as mask or vaccination mandates.
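The two evaluation metrics named in the abstract — weighted F1 against gold labels and Fleiss' kappa across tools — can be sketched in a few lines of plain Python. The labels and tool outputs below are fabricated placeholders for illustration, not data from the study.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with each class's support in y_true as its weight."""
    total = len(y_true)
    support = Counter(y_true)
    score = 0.0
    for lab in support:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += f1 * support[lab] / total
    return score

def fleiss_kappa(ratings):
    """Fleiss' kappa: ratings is a list of items, each a list of one label per rater."""
    n = len(ratings[0])   # raters (tools) per item
    N = len(ratings)      # number of items
    category_totals = Counter()
    p_bar = 0.0
    for item in ratings:
        counts = Counter(item)
        category_totals.update(counts)
        # Observed agreement on this item: fraction of rater pairs that agree.
        p_bar += (sum(k * k for k in counts.values()) - n) / (n * (n - 1)) / N
    # Chance agreement from the overall category distribution.
    p_e = sum((v / (N * n)) ** 2 for v in category_totals.values())
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical gold labels and outputs from three sentiment tools.
gold   = ["pos", "neg", "neu", "neg", "pos"]
tool_a = ["pos", "neg", "pos", "neg", "neg"]
tool_b = ["neg", "neg", "neu", "pos", "pos"]
tool_c = ["pos", "pos", "neu", "neg", "neu"]

print(round(weighted_f1(gold, tool_a), 3))
per_item = list(zip(tool_a, tool_b, tool_c))   # rows = posts, columns = tools
print(round(fleiss_kappa(per_item), 3))
```

A kappa near 0, as reported in the abstract, means the tools agree with one another barely more than chance would predict, independent of how any single tool scores against the gold labels.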
