AI-enabled chatbots designed to build social relationships with humans are becoming increasingly common in the marketplace, with millions of registered users engaging with these chatbots as virtual companions or therapists. These chatbots make use of what is often called the "Eliza effect": the tendency of human users to attribute human-like knowledge and understanding to a computer program. A common interpretation of this phenomenon is to treat this form of relating as delusion, error, or deception, in which the user misunderstands or forgets that they are talking to a computer. As an alternative, we draw on the work of feminist Science and Technology Studies (STS) scholars, who offer a robust and capacious tradition for thinking about and engaging with human–nonhuman relationships in non-reductive ways. We closely analyze two different stories about encounters with chatbots, taking up the feminist STS challenge to attend to the agency of significant otherness in the encounter. The first is Joseph Weizenbaum's story of rejecting as a monstrosity ELIZA, the chatbot technology he designed to mimic a therapist, based on his experiences of watching others engage with it. The second is the story of Julie, who experiences a mental health crisis, and her chatbot Navi, as told through Julie's own accounts of their relationship in the recent podcast Radiotopia Presents: Bot Love. We argue that a reactionary humanist narrative, such as the one Weizenbaum presents, is incapable of attending to the possibilities of pleasure, play, or even healing that might occur in human–chatbot relatings. Other forms of engaging with, understanding, and making sense of this new technology and its potentialities are needed in both research and mental health practice, particularly as more and more patients begin to use these technologies alongside traditional human-led psychotherapy.