Abstract

New large language models (LLMs) like ChatGPT have the potential to change qualitative research by contributing to every stage of the research process, from generating interview questions to structuring research publications. However, it is far from clear whether such ‘assistance’ will enable the qualitative researcher or deskill and eventually displace them. This paper sets out to explore the implications for qualitative research of the recently emerged capabilities of LLMs: how they have acquired their seemingly ‘human-like’ capabilities to ‘converse’ with us humans, and in what ways these capabilities are deceptive or misleading. Building on a comparison of the different ‘trainings’ of humans and LLMs, the paper first traces the seemingly human-like qualities of the LLM to the human proclivity to project communicative intent onto LLMs’ purely imitative capacity to predict the structure of human communication. It then goes on to detail the ways in which such human-like communication is deceptive and misleading: the absolute ‘certainty’ with which LLMs ‘converse’, their intrinsic tendencies to ‘hallucination’ and ‘sycophancy’, the narrow conception of ‘artificial intelligence’, LLMs’ complete lack of ethical sensibility or capacity for responsibility, and finally the feared danger of an ‘emergence’ of ‘human-competitive’ or ‘superhuman’ LLM capabilities. The paper concludes by noting the potential dangers of the widespread use of LLMs as ‘mediators’ of human self-understanding and culture. A postscript offers a brief reflection on what only humans can do as qualitative researchers.
