Abstract

The COVID-19 pandemic has intensified the need for mental health support across the whole spectrum of the population. Where global demand outstrips the supply of mental health services, established interventions such as cognitive behavioural therapy (CBT) have been adapted from traditional face-to-face interaction to technology-assisted formats. One notable development is the emergence of Artificially Intelligent (AI) conversational agents for psychotherapy. Pre-pandemic, these adaptations had demonstrated some positive results, but they also generated debate over a number of ethical and societal challenges. This article commences with a critical overview of both the positive and negative aspects of AI-CBT in its present form. Thereafter, an ethical framework is applied with reference to the themes of (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. These themes are then discussed in terms of practical recommendations for future developments. Although automated forms of therapeutic support may appeal during times of global crisis, ethical thinking should be at the core of AI-CBT design, in addition to guiding research, policy, and real-world implementation as the world considers post-COVID-19 society.

Highlights

  • The unprecedented global crisis has intensified and diversified the sources of private distress, making evident the need for broader access to psychological support [1]

  • Building on lessons from positive and negative developments, we discuss a set of ethical considerations for mental health chatbots and conversational agents, focusing on openly available commercial applications of cognitive behavioural therapy (CBT) that operate without a human therapist

  • We found pertinence in the principles of beneficence, non-maleficence, autonomy, justice, and explicability, previously used in a general typology of Artificially Intelligent (AI) ethics [29], and in the structure of findings from a systematic review of machine learning for mental health [30]


INTRODUCTION

The unprecedented global crisis has intensified and diversified the sources of private distress, making evident the need for broader access to psychological support [1]. Automated conversational agents and chatbots are increasingly promoted as potentially efficient emotional support tools for larger population segments during the pandemic [7] and afterwards [8]. It is over 50 years since ELIZA was created [9], the first computer programme to use pattern-matching algorithms to mimic human-therapist interactions by mechanically connecting end-user inputs to answers from a pre-defined set of responses. Building on lessons from positive and negative developments, we discuss a set of ethical considerations for mental health chatbots and conversational agents, focusing on openly available commercial applications of cognitive behavioural therapy (CBT) that operate without a human therapist. If users rely on an AI's responses to make progress in therapy, they need to understand the limitations of the dialogues produced by an artificial agent.
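
To illustrate the mechanical nature of such pattern matching, the following minimal Python sketch maps user inputs to pre-defined response templates in the way the text describes. The rules shown are invented for illustration only; they are not Weizenbaum's original ELIZA script.

```python
import re
import random

# Illustrative ELIZA-style rules (hypothetical, not the original script):
# each regular expression maps a user input to canned response templates.
RULES = [
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
DEFAULT_RESPONSES = ["Please go on.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Mechanically match the input against each pattern and fill a
    pre-defined template with the captured text; no understanding of
    the user's meaning is involved."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT_RESPONSES)

if __name__ == "__main__":
    # e.g. "Why do you feel anxious about work?"
    print(respond("I feel anxious about work"))
```

As the sketch makes plain, any input that happens to match a pattern triggers a scripted reply, regardless of context; this is the limitation users of such agents need to understand.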

