Abstract

Background: Monitoring linguistic cues from adolescents’ digital media use (DMU; ie, digital content transmitted on the web, such as through text messages or social media) that could denote suicidal risk offers a unique opportunity to protect adolescents vulnerable to suicide, the second leading cause of death among youth. Adolescents communicate through digital media in high volumes and frequently express emotionality; in fact, web-based disclosures of suicidality are more common than in-person disclosures. Automated digital media monitoring triggered by a natural language processing algorithm offers the potential to detect suicidal risk from subtle linguistic units (eg, negatively valenced words, phrases, or emoticons known to be associated with suicidality) present within adolescents’ digital media content and to use this information to respond to alerts of suicidal risk. Critical to the implementation of such an approach is the consideration of its acceptability in the clinical care of adolescents at high risk of suicide.

Objective: Through data collection among recently suicidal adolescents, parents, and clinicians, this study examines the current context of digital media monitoring for suicidal adolescents seeking clinical care to inform the need for automated monitoring and the factors that influence its acceptance within clinical care.

Methods: A total of 15 recently suicidal adolescents (aged 13-17 years), 12 parents, and 10 clinicians participated in focus groups, qualitative interviews, and a group discussion, respectively. Data were recorded, transcribed, and analyzed using thematic analysis.

Results: Participants described important challenges to current strategies for monitoring the DMU of suicidal youth. They felt that automated monitoring would have advantages over current approaches, namely, protecting web-based environments and aiding adolescent disclosure and support seeking about web-based suicidal risk communication that may otherwise go unnoticed. However, they identified barriers that could impede implementation within clinical care: adolescents’ and parents’ concerns about unintended consequences of automated monitoring (ie, the potential for loss of privacy or false alerts) and clinicians’ concerns about liability to respond to alerts of suicidal risk. On the basis of the needs and preferences of adolescents, parents, and clinicians, a model for automated digital media monitoring is presented that aims to optimize acceptability within clinical care for suicidal youth.

Conclusions: Automated digital media monitoring offers a promising means to augment detection of and response to suicidal risk within the clinical care of suicidal youth when strategies that address the preferences of adolescents, parents, and clinicians are in place.

Highlights

  • Background: There is growing public health interest in strategies for monitoring the digital media use (DMU) of youth at risk for suicide, the second leading cause of death among adolescents [1].

  • Participants described important challenges to current strategies for monitoring the DMU of suicidal youth. They felt that automated monitoring would have advantages over current approaches, namely, protecting web-based environments and aiding adolescent disclosure and support seeking about web-based suicidal risk communication that may otherwise go unnoticed.

  • Others did not take a moral stance on monitoring; rather, they reported valuing the ability to autonomously identify with a group of like-minded others on digital media.

Introduction

Background

There is growing public health interest in strategies for monitoring the digital media use (DMU; ie, digital content transmitted on the web, such as through text messages or social media) of youth at risk for suicide, the second leading cause of death among adolescents [1]. Despite these youths’ potentially greater vulnerability to negative digital media experiences, there are currently no evidence-based approaches for monitoring their DMU. Monitoring linguistic cues from adolescents’ DMU that could denote suicidal risk therefore offers a unique opportunity to protect adolescents vulnerable to suicide. Automated digital media monitoring triggered by a natural language processing algorithm offers the potential to detect suicidal risk from subtle linguistic units (eg, negatively valenced words, phrases, or emoticons known to be associated with suicidality) present within adolescents’ digital media content and to use this information to respond to alerts of suicidal risk. Critical to the implementation of such an approach is the consideration of its acceptability in the clinical care of adolescents at high risk of suicide.
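To make the detection mechanism concrete, the sketch below illustrates the general idea of lexicon-based cue matching over a stream of messages. It is a minimal, purely illustrative example: the RISK_CUES list, function names, and alert format are hypothetical assumptions, not the study’s algorithm, and a deployed system would rely on a clinically validated lexicon or a trained natural language processing classifier rather than a toy keyword list.

```python
# Illustrative sketch only: a real monitoring system would use a
# validated NLP model, handle negation, misspellings, and emoticons,
# and route alerts to clinicians under an agreed response protocol.

# Hypothetical lexicon of negatively valenced cues (assumption for
# illustration; not a clinical instrument).
RISK_CUES = {
    "hopeless",
    "worthless",
    "no reason to live",
    "better off without me",
}

def flag_risk_cues(message: str) -> list[str]:
    """Return the risk cues found in a single digital media message."""
    text = message.lower()
    return [cue for cue in RISK_CUES if cue in text]

def monitor(messages: list[str]) -> list[tuple[int, list[str]]]:
    """Scan a stream of messages and collect (index, matched cues) alerts."""
    alerts = []
    for i, msg in enumerate(messages):
        cues = flag_risk_cues(msg)
        if cues:
            alerts.append((i, cues))
    return alerts

if __name__ == "__main__":
    sample = ["see you at practice", "i feel so hopeless lately"]
    for idx, cues in monitor(sample):
        print(f"alert: message {idx} matched cues {cues}")
```

Even in this simplified form, the sketch shows why acceptability concerns arise: exact string matching over private messages inevitably produces false alerts and requires access to message content, the two barriers participants in this study raised most often.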
