Recent advancements in artificial intelligence (AI) have raised concerns about its potential misuse in education. Because large language models (LLMs) such as ChatGPT and Bard can generate human-like text, researchers and educators have questioned whether tasking students with writing academic essays remains meaningful. We aimed to explore whether the two LLMs could generate unstructured essays on medical students’ personal experiences of challenges and ethical dilemmas that are indistinguishable from human-written texts. We collected 47 original student-written essays, from which we extracted keywords to develop prompts for the LLMs. We then used these prompts to generate an equivalent number of essays with ChatGPT and Bard. We analysed the essays using the Linguistic Inquiry and Word Count (LIWC-22) software, extracting the main LIWC summary measures and variables related to social and psychological processes. We also conducted sub-analyses for sixteen student essays that were presumably written entirely or in part by AI, according to two AI detectors. After removing the AI-co-written student essays from the analysis, we found that AI-written essays used more language related to affect, authenticity, and analytical thinking than student-written essays. Despite these differences in language characteristics, both LLMs proved highly effective at generating essays on students’ personal experiences and opinions regarding real-life ethical dilemmas.