Abstract
Through ChatGPT and other artificial intelligence platforms, large language models’ (LLMs’) applications have expanded across daily life, including research. Yet the qualitative paradigm’s methodological assumption of socially mediated interpretivism is at odds with how LLMs operate. Qualitative research is appropriate for inquiry where conceptual development and interpretation are required. Specifically, Disability Studies scholars have used qualitative research to understand more about disabled people’s experiences. Like other marginalized identities, disability is often pejoratively misunderstood. We raise concerns about whether LLMs can address key markers of qualitative analysis, contextualizing these concerns within disability research. To test these concerns, we assigned two LLMs, ChatGPT and Gemini, coding tasks using existing de-identified secondary data. We found that the LLMs were unable to produce codes that were high quality, credible, or consistent, and that they could not parse data from research participants with certain disabilities. We discuss implications for future methodological decisions and policies.