Abstract
Background
Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from 'virtual psychotherapists' to social robots in dementia care and autism spectrum disorder, to robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. To enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed.
Objective
This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI.
Methods
Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of current embodied AI applications was conducted across the fields of psychiatry, psychology, and psychotherapy. To enable a comprehensive evaluation, the analysis was structured around three steps: assessment of potential benefits; analysis of overarching ethical issues and concerns; and discussion of specific ethical and social issues of the interventions.
Results
From an ethical perspective, important benefits of embodied AI applications in mental health include new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, and freeing up time for physicians. Overarching ethical issues and concerns include harm prevention and various questions of data ethics; a lack of guidance on the development of AI applications, their clinical integration, and the training of health professionals; 'gaps' in ethical and regulatory frameworks; and the potential for misuse, including using the technologies to replace established services and thereby potentially exacerbating existing health inequalities. Specific challenges identified and discussed in the application of embodied AI include matters of risk assessment, referrals, and supervision; the need to respect and protect patient autonomy; the role of non-human therapy; transparency in the use of algorithms; and specific concerns regarding the long-term effects of these applications on understandings of illness and the human condition.
Conclusions
We argue that embodied AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care. We conclude by indicating areas of future research and developing recommendations for high-priority areas in need of concrete ethical guidance.
Highlights
Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services, that is, in psychiatry, psychology, and psychotherapy
Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of current embodied AI applications was conducted across the fields of psychiatry, psychology, and psychotherapy
We argue that embodied AI is a promising approach across the field of mental health; further research is needed to address the broader ethical and societal concerns of these technologies to negotiate best research and medical practices in innovative mental health care
Summary
Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services, that is, in psychiatry, psychology, and psychotherapy. Artificially intelligent virtual and robotic agents are available for relatively low-level elements of mental health support, such as comfort or social interaction, and also perform high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals such as psychotherapists [4]. Such 'virtual' or 'robotic therapists' include an artificially intelligent algorithm that responds to the client or patient independently of any expert human guidance, through a virtually embodied presence, such as a face icon, or a physically embodied presence, such as a robotic interface. Innovations range from 'virtual psychotherapists' to social robots in dementia care and autism spectrum disorder, to robots for sexual disorders. To enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed.