Abstract
Across commercial social media platforms and dedicated support forums alike, mental health content raises important questions about what constitutes risk and harm online, and how automated and human moderation practices can be re-configured to accommodate resilient behaviours and social support. Drawing on work with three Australian mental health organisations that run successful discussion and support forums, this paper identifies moderation practices that can help to re-think how mental health content is managed. The work aims to improve safety and resilience in these spaces, drawing insights from successful practices to inform algorithmic and moderator treatment of mental health content more widely across social media. Through an analysis of interviews and workshops with forum managers and moderators, I argue that platforms must incorporate strengths-based context (resilience indicators) into their moderation systems and practices, challenging simplistic assessments of mental health content as risk and harm.