Abstract

This study uses a combination of machine learning and fine-grained qualitative analysis to explore the online disinhibition effect in Twitter-based discourse around #BlackLivesMatter. Our analysis shows that uncivil tweets in the nonmobile dataset are twice as likely to be overtly racist and to challenge Black Lives Matter. In both nonmobile and mobile tweets, uncivil language is deployed in a variety of ways that are sometimes consistent with how we understand the online disinhibition effect, and sometimes not. As such, this study advances our understanding of instances where actors indulge in the more harmful forms of disinhibition, such as personal attacks, vulgar language, and hate speech, which are often termed "toxic disinhibition." In sum, these findings add nuance to the way we understand the online disinhibition effect and respond to a vital gap in the existing body of knowledge.
