Abstract

Progress in understanding students' development of psychological literacy is critical. However, generative AI represents an emerging threat to higher education that may dramatically affect student learning and how that learning transfers to practice. This research investigated whether ChatGPT responded in ways that demonstrated psychological literacy and whether its responses matched those of subject matter experts (SMEs) on a measure of psychological literacy. We tasked ChatGPT with responding to 13 psychology research methods scenarios and with rating each of the five response options that the research team had already developed for each scenario. ChatGPT responded in ways that would typically be regarded as displaying a high level of psychological literacy. The response options, which had previously been rated by two groups of SMEs, were then compared with the ratings provided by ChatGPT. Pearson correlations were very high (r = .73 and .80, respectively), as were Spearman's rank correlations (rho = .81 and .82, respectively); Kendall's tau values were also quite high (tau = .67 and .68, respectively). We conclude that ChatGPT may generate responses that match SME psychological literacy in research methods, a finding that could also generalise across multiple domains of psychological literacy.
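The three agreement statistics reported above can be illustrated with a minimal pure-Python sketch. The ratings below are hypothetical placeholders for illustration only, not the study's actual SME or ChatGPT data; the functions implement the standard formulas (Spearman's rho as Pearson's r on average ranks, and the tau-b variant of Kendall's tau, which adjusts for ties).

```python
import math
from itertools import combinations

def average_ranks(xs):
    # Assign 1-based ranks; tied values share the mean of their positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    # Pearson's r: covariance divided by the product of standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rho: Pearson's r computed on the ranks.
    return pearson(average_ranks(x), average_ranks(y))

def kendall_tau_b(x, y):
    # Kendall's tau-b: concordant minus discordant pairs, tie-corrected.
    conc = disc = tx = ty = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0 and dy == 0:
            continue            # tied in both: excluded from the denominator
        elif dx == 0:
            tx += 1             # tied in x only
        elif dy == 0:
            ty += 1             # tied in y only
        elif dx * dy > 0:
            conc += 1
        else:
            disc += 1
    return (conc - disc) / math.sqrt((conc + disc + tx) * (conc + disc + ty))

# Hypothetical 1-5 ratings of the same response options by two raters.
sme_ratings = [5, 4, 2, 1, 3, 5, 4, 2, 3, 1]
chatgpt_ratings = [5, 3, 2, 1, 4, 4, 5, 1, 3, 2]

print(f"r   = {pearson(sme_ratings, chatgpt_ratings):.2f}")
print(f"rho = {spearman(sme_ratings, chatgpt_ratings):.2f}")
print(f"tau = {kendall_tau_b(sme_ratings, chatgpt_ratings):.2f}")
```

With real data, `scipy.stats.pearsonr`, `spearmanr`, and `kendalltau` compute the same quantities along with significance tests.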
