Abstract

Suicide is a leading cause of death in the US. Posts on social media can reveal valuable information about individuals with suicidal ideation and help prevent tragic outcomes. However, studying suicidality through online posts is challenging, as people may be unwilling to share their thoughts directly because of psychological and social barriers. Moreover, most previous studies focused on evaluating machine learning techniques for detecting suicidal posts rather than exploring the contextual features present in them. This study aimed not only to classify posts based on sentiment analysis but also to identify suicide-related psychiatric stressors, e.g., family problems or school stress, and to examine the contextual features of the posts, especially those that are misclassified. We used two techniques, random forest and Lasso generalized linear models, and found that they performed similarly. Our findings suggest that while machine learning algorithms can identify most potentially harmful posts, they can also introduce bias, and human intervention is needed to minimize it. We argue that some posts may be very difficult or impossible for algorithms alone to tag correctly; they require human understanding and empathy.
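For readers who want a concrete picture of the model comparison, the sketch below shows one plausible way to fit the two model families named above on labeled posts using scikit-learn. The corpus, labels, features, and parameter choices are all illustrative assumptions, not the study's actual pipeline; a Lasso generalized linear model for a binary outcome is implemented here as L1-penalized logistic regression.

```python
# A minimal sketch, not the authors' method: comparing a random forest and a
# Lasso-style GLM (L1-penalized logistic regression) on a toy post corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
import numpy as np

# Hypothetical posts labeled 1 (suicide-related) or 0 (other); real studies
# would use annotated social media data.
posts = [
    "I can't handle the pressure at school anymore",
    "Had a great day at the park with friends",
    "My family keeps fighting and I feel hopeless",
    "Just finished a fun weekend project",
]
labels = np.array([1, 0, 1, 0])

# Simple bag-of-words features; the paper's actual feature set may differ.
X = TfidfVectorizer().fit_transform(posts)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # The L1 penalty gives Lasso-style sparsity; liblinear supports it.
    "lasso_glm": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
}

for name, model in models.items():
    # Tiny 2-fold CV only because the toy corpus has 4 posts.
    scores = cross_val_score(model, X, labels, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

On realistic data, inspecting the two models' disagreements and misclassifications, rather than their accuracy alone, is what surfaces the contextual features and potential biases the abstract emphasizes.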
