The emergence of artificial intelligence (AI) is transforming how humans live and interact, raising both excitement and concerns, particularly about the potential for AI consciousness. For example, Google engineer Blake Lemoine suggested that the AI chatbot LaMDA might be sentient. At that time, GPT-3 was one of the most powerful publicly available language models, capable of simulating human reasoning to a certain extent. The notion that GPT-3 might possess some degree of consciousness could be linked to its ability to produce human-like responses, hinting at a basic level of understanding. To explore this further, we administered both objective and self-assessment tests of cognitive intelligence (CI) and emotional intelligence (EI) to GPT-3. Results showed that GPT-3 outperformed average humans on CI tests requiring the use and demonstration of acquired knowledge. However, its logical reasoning and EI capacities matched those of an average human. GPT-3's self-assessments of CI and EI did not always align with its objective performance, showing variations comparable to those observed in different human subsamples (e.g., high performers, males). We further discuss whether these results signal emerging subjectivity and self-awareness in AI. Future research should examine various language models to identify emergent properties of AI. The goal is not to discover machine consciousness itself, but to identify signs of its development occurring independently of training and fine-tuning processes. If AI is to be further developed and widely deployed in human interactions, creating empathic AI that mimics human behavior is essential. The rapid advancement toward superintelligence requires continuous monitoring of AI's human-like capabilities, particularly in general-purpose models, to ensure safety and alignment with human values.