This article poses a simple question: can AI lie? To answer it, the article takes popular AI chatbots, such as ChatGPT, as its point of inquiry. In doing so, the psychoanalytic, philosophical, and technological significance of AI, and its complexities, is examined in relation to the dynamics of truth, falsity, and deception. That is, by critically considering the chatbot's ability to engage in natural language conversation and provide contextually relevant responses, it is argued that what separates the AI chatbot from anthropocentric debates, which allude to some form of conscious recognition on the part of AI, is the importance of the lie – an importance which a psychoanalytic approach can reveal. Indeed, while AI technologies can undoubtedly blur the line between lying and truth-speaking, it is detailed how the AI chatbot remains unable to lie authentically or, in other words, unable to lie like a human. For psychoanalysis, the capacity to lie bears witness to the unconscious and thus plays an important role in determining the subject. For this reason, rather than uncritically accepting the chatbot's authority – an authority readily reflected in its honest responses and frank admissions – a psychoanalytic (Lacanian) perspective can highlight the significance of the unconscious as a distorting factor in determining the subject. To elucidate this argument, specific attention is given to introducing and applying Lacan's distinction between the subject of the enunciation and the subject of the enunciated. This distinction is used to assert that what continues (for now) to set us apart from AI technology is not necessarily our 'better knowledge' but our capacity to consciously engage in acts of falsehood that reveal the social nuances and significance of the lie.