Abstract

Online harassment is a public health concern, and social media companies often propose algorithms as a solution. As online harassment grows, there are concerns that algorithms acting as content moderators fail to achieve their desired effect because of an inability to contextualize social issues. This research contributes to the intersection of algorithms and online harassment by investigating the algorithmic folk theories of the victims, perpetrators, and bystanders of online harassment. Strategically sampling the experiences of people from marginalized identity categories who experienced harassment, we conducted grounded theory interviews and found that people theorize that algorithmic failures fuel online harassment and isolate victims. We describe four folk theories that victims, perpetrators, and witnesses use to make sense of their experiences of online harassment. The first asserts that algorithms only pay attention to harassment incidents with a large number of flags. The second describes perceptions of how algorithms amplify harassment content to increase engagement. The third refers to perceptions that algorithms seek to form new audiences for content, which networks harassers together. The fourth finds that victims perceive algorithms as failing to contextualize the harassment of marginalized communities. Victims, bystanders, and perpetrators each described using their folk theories to instigate, push back against, or succumb to the culture of online harassment. Understanding these algorithmic online harassment folk theories highlights how social media algorithms perpetuate harassment and fail to support victims.

Keywords: algorithmic folk theories, online harassment, networked harassment, social media, content moderation.
