Abstract

Background
Health-related stigma can act as a barrier to seeking treatment and can negatively impact wellbeing. Comparing stigma communication across different conditions may generate insights previously lacking from condition-specific approaches and help to broaden our understanding of health stigma as a whole.

Method
A sequential explanatory mixed-methods approach was used to investigate the prevalence and type of health-related stigma on Twitter by extracting 1.8 million tweets referring to five potentially stigmatized health conditions and disorders (PSHCDs): Human Immunodeficiency Virus (HIV)/Acquired Immunodeficiency Syndrome (AIDS), diabetes, eating disorders, alcoholism, and substance use disorders (SUDs). Firstly, 1,500 tweets were manually coded by stigma communication type, followed by a larger sentiment analysis (n = 250,000). Finally, the most prevalent category of tweets, "Anti-Stigma and Advice" (n = 273), was thematically analyzed to contextualize and explain its prevalence.

Results
We found differences in stigma communication between PSHCDs. Tweets referring to substance use disorders were frequently accompanied by messages of societal peril, whereas HIV/AIDS-related tweets were most associated with potential labels of stigma communication. We found consistencies between automatic tools for sentiment analysis and manual coding of stigma communication. Finally, the themes identified by our thematic analysis of anti-stigma and advice were Social Understanding, Need for Change, Encouragement and Support, and Information and Advice.

Conclusions
Although one third of health-related tweets were manually coded as potentially stigmatizing, the notable presence of anti-stigma suggests that users are making efforts to counter online health stigma. The negative sentiment and societal peril associated with substance use disorders reflect recent suggestions that, although attitudes toward physical diseases have improved in recent years, stigma around addiction has seen little decline. Finally, the consistencies between our manual coding and automatic tools for identifying language features of harmful content suggest that machine learning approaches may be a reasonable next step for identifying general health-related stigma online.
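
The abstract does not name the automatic sentiment tools used, so the sketch below is only illustrative: it scores short example tweets with NLTK's VADER analyzer, a lexicon-based tool commonly applied to social-media text. The example tweets and the ±0.05 compound-score cutoffs are assumptions for demonstration, not material from the study.

```python
# Minimal sketch of tweet-level sentiment scoring, assuming a VADER-style
# lexicon tool (NLTK); the paper does not specify which tool it used.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Hypothetical example tweets standing in for the 250,000-tweet sample.
tweets = [
    "So proud of everyone speaking openly about eating disorder recovery",
    "People with addiction are a danger to everyone around them",
]

sia = SentimentIntensityAnalyzer()
for tweet in tweets:
    scores = sia.polarity_scores(tweet)  # neg/neu/pos plus compound in [-1, 1]
    if scores["compound"] >= 0.05:       # common, but assumed, cutoffs
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.3f}  {tweet}")
```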
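
As a rough illustration of the machine-learning next step the conclusion points to, the sketch below trains a supervised text classifier on manually coded tweets. The pipeline (TF-IDF features with logistic regression via scikit-learn), the example tweets, and the two labels are all hypothetical; the study itself did not train such a model.

```python
# Illustrative sketch only: learning to flag potentially stigmatizing tweets
# from a manually coded sample. All data and design choices are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweets with simplified manual codes.
texts = [
    "Addicts will ruin this neighbourhood",          # societal peril
    "Recovery is possible, please reach out for help",  # anti-stigma and advice
    "Diabetics just need to stop eating sugar",      # labelling
    "Sharing resources for anyone living with HIV",  # anti-stigma and advice
]
labels = ["stigmatizing", "anti-stigma", "stigmatizing", "anti-stigma"]

# TF-IDF unigrams/bigrams feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predict the code for an unseen tweet.
print(model.predict(["Support is out there for people with eating disorders"]))
```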
