Abstract

Social media platforms make choices about what content is and is not permissible. For example, online harassment and hate speech are growing problems in many online settings, and platforms must decide whether and how to address them. These choices are often opaque, vary from platform to platform, and can change over time with little notice. This study examines how Facebook, Twitter, and Reddit have defined harassment and hate speech over time, as well as whom they frame as responsible for dealing with these problems. Through content analysis, this study examines the policy structures that house relevant policies, the policy documents themselves, and related blog posts. The results illustrate a phased approach to defining harassment and hate that has become increasingly complex and nuanced over time. Additionally, this work shows a compounding view of responsibility, which began with users but has expanded over time to include the platform itself, technology, and external actors such as civil society groups. This paper highlights continued opacity and increasing complexity while also providing the contextual historical information necessary for both future research and platform governance decisions.
