Abstract

Shadowbanning is a unique content moderation strategy that has recently received media attention for the ways it impacts marginalized social media users and communities. Social media companies often deny this content moderation practice despite users' reported experiences online. In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning, develop folk theories about shadowbanning, and attempt to prove its occurrence. We find that marginalized social media users collaboratively develop and test algorithmic folk theories to make sense of their unclear experiences with shadowbanning. Participants reported direct consequences of shadowbanning, including frustration, decreased engagement, the inability to post specific content, and potential financial implications. They reported holding negative perceptions of platforms where they experienced shadowbanning, sometimes attributing their shadowbans to platforms' deliberate suppression of marginalized users' content. Some marginalized social media users acted on their theories by adapting their social media behavior to avoid potential shadowbans. We contribute collaborative algorithm investigation: a new concept describing social media users' strategies of collaboratively developing and testing algorithmic folk theories. Finally, we present design and policy recommendations for addressing shadowbanning and its potential harms.
