Abstract

Online social networks can be leveraged for mental healthcare monitoring, with Artificial Intelligence (AI) and Machine Learning (ML) techniques used to detect various mental health disorders and assess the corresponding risk. Recent research in this domain has primarily focused on deep neural networks and Transformer-based Large Language Models (LLMs), which have become state-of-the-art for most natural language processing and computational linguistics tasks due to their unmatched prediction accuracy. Unlike conventional machine learning algorithms, however, these deep neural networks are black-box architectures whose predicted outcomes are difficult to interpret and explain. A black-box classification outcome is insufficient for healthcare applications: such systems will not be widely adopted and trusted by healthcare practitioners if the practitioners cannot understand and explain the reasoning behind the decisions made by an AI- and ML-based diagnostic system. The key objective of our research is to demonstrate the application of model-agnostic, post-hoc surrogate eXplainable AI (XAI) techniques for explaining the classification decisions of mental healthcare diagnostic systems built on pretrained Transformer-based LLMs fine-tuned (or trained) to detect depressive and suicidal behavior from user-generated content (UGC) on online social networks. For this, we have used two widely adopted techniques, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). We have conducted extensive and in-depth experiments with four datasets and six pretrained LLMs, three of which have been domain-adapted on mental health related datasets, and we have also performed Few-Shot Learning experiments with these three domain-adapted LLMs. The results of our qualitative and descriptive data analysis demonstrate that, in order to build a comprehensive understanding of a person's psychological state, emotions, and behavior and to discover the causes, symptoms, and triggers of mental health issues, it is essential to pair supervised Transformer-based LLMs with XAI techniques. Alternatively, BERTopic, a Transformer-based unsupervised topic modeling technique, may be used for mental health risk monitoring and cause or symptom extraction when supervised training of LLMs is not feasible due to dataset annotation or availability challenges.
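
As a rough illustration only, and not the authors' exact pipeline, the sketch below shows how model-agnostic, post-hoc explanations with SHAP and LIME can be attached to a Hugging Face text-classification pipeline; the checkpoint path, example text, and class names are placeholders to be replaced by the fine-tuned classifier and labels from the study.

```python
# Minimal sketch (assumed setup, not the paper's exact configuration) of
# post-hoc, model-agnostic explanation of a Transformer classifier.
import numpy as np
import shap
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Hypothetical fine-tuned depression classifier; substitute your own checkpoint.
clf = pipeline(
    "text-classification",
    model="path/to/finetuned-mentalbert-depression",  # placeholder path
    top_k=None,  # return scores for all classes
)

text = "I haven't slept in days and nothing feels worth doing anymore."

# SHAP: the Explainer wraps the pipeline and masks tokens to estimate each
# token's contribution to the predicted class probabilities.
shap_explainer = shap.Explainer(clf)
shap_values = shap_explainer([text])
html = shap.plots.text(shap_values, display=False)  # token-level heatmap (render in a notebook or save the HTML)

# LIME: fits a local linear surrogate around the instance using perturbed
# copies of the input text.
def predict_proba(texts):
    """Return class probabilities with a stable column order across calls."""
    outputs = clf(list(texts))
    return np.array(
        [[d["score"] for d in sorted(out, key=lambda d: d["label"])] for out in outputs]
    )

lime_explainer = LimeTextExplainer(class_names=["depressed", "not depressed"])  # placeholder labels
lime_exp = lime_explainer.explain_instance(text, predict_proba, num_features=10)
print(lime_exp.as_list())  # (token, weight) pairs from the local surrogate
```

Both explainers treat the classifier as a black box and return token-level attributions, which is what makes them applicable to any of the fine-tuned LLMs compared in the paper.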
