Abstract

Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape, such as therapeutic support for people with mental health problems who lack access to care. At the same time, the adoption of CAI poses many risks that require in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic ethical and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is better conceived of as a tool or as an agent. This question serves as a framework for the subsequent ethical analysis of CAI, focusing on the topics of (self-)knowledge, (self-)understanding, and relationships. Second, we propose further conceptual and ethical analysis of human-AI interaction and argue that CAI cannot be considered an equal partner in a conversation, as a human therapist can. Instead, CAI's role in a conversation should be restricted to specific functions.
