Abstract

The advent of ChatGPT, an AI-powered language model capable of producing grammatically accurate and coherent texts, has generated considerable concern among educationalists anxious about its potential to enable cheating among students and to undermine the development of critical thinking, problem-solving, and literacy skills. The similarities and differences between ChatGPT texts and human writing, however, remain underexplored. This study aims to bridge this gap by comparing the use of 3-word bundles in A-level argumentative essays written by British students with those generated by ChatGPT. Our findings show that ChatGPT essays contain bundles at a lower frequency but with a higher type/token ratio, suggesting that its bundles are more rigid and formulaic. We also found that noun- and preposition-based bundles are more prevalent in ChatGPT texts, where they are employed for abstract description and to provide transitional and structuring cues. Student essays are characterized by greater use of epistemic stance and authorial presence, both crucial in persuasive argumentation. We attribute these distinct patterns in ChatGPT's output to its processing of vast training data and underlying statistical algorithms. The study points to pedagogical implications for incorporating ChatGPT into writing instruction.
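The two bundle measures the abstract reports (frequency of bundles and their type/token ratio) can be illustrated with a minimal Python sketch. This is not the authors' pipeline: the function names, the recurrence threshold of 2, and the toy snippets below are illustrative assumptions, and published lexical-bundle studies typically add dispersion criteria and per-million-word normalization.

from collections import Counter

def three_word_bundles(tokens):
    # Count every contiguous 3-word sequence (3-gram) in a tokenized text.
    return Counter(" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2))

def bundle_stats(tokens, min_freq=2, per_n_words=1000):
    # Frequency (per 1,000 words) and type/token ratio of recurrent 3-grams.
    bundles = three_word_bundles(tokens)
    # Keep only 3-grams that recur; real studies usually also require
    # occurrence across several texts (a dispersion threshold).
    recurrent = {b: c for b, c in bundles.items() if c >= min_freq}
    bundle_tokens = sum(recurrent.values())   # all occurrences of recurrent bundles
    bundle_types = len(recurrent)             # distinct recurrent bundle forms
    freq = bundle_tokens / max(len(tokens), 1) * per_n_words
    ttr = bundle_types / bundle_tokens if bundle_tokens else 0.0
    return freq, ttr

# Toy comparison with two invented snippets (not data from the study).
student = "in my opinion the government should act now and in my opinion we must".split()
chatgpt = "it is important to note that it is important to consider this issue".split()
print(bundle_stats(student))   # (bundle frequency per 1,000 words, type/token ratio)
print(bundle_stats(chatgpt))

A higher type/token ratio here means more distinct bundle forms relative to their total occurrences; the frequency figure captures how much of the text is built from recurrent bundles at all.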
