Abstract

Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithm when creating and engaging with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumnavigate TikTok's algorithm through algospeak (i.e., codewords or turns of phrase) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict the social support exchanged within online nonsuicidal self‐injury communities.
