Abstract
Three within-subjects experiments were conducted by providing students with answers to content questions across different subject matters (a definition, explanation, and example) offered by a human professor (subject-matter expert) versus generative artificial intelligence (ChatGPT). In a randomized order, students read both the expert's and ChatGPT's responses (both were de-identified and presented as "professors," so students were not aware that one was artificial intelligence), rated both explanations on teaching clarity and competence, and then reported their affect toward the content and situational interest. Study 1 (interpersonal communication content) revealed no significant differences in repeated-measures ratings comparing the expert versus ChatGPT. However, in Study 2 (business communication content) and Study 3 (instructional communication content), compared with the expert, ChatGPT (impersonating a professor) was rated by the same students as higher in teaching clarity and competence, and it generated more student affect and situational interest. In Study 2 and Study 3, a within-subjects mediation analysis revealed that ChatGPT generated more student affect toward the content through the clarity of the responses it provided to students.