Abstract

One potential benefit of AI is that it may help optimize teachers' tasks, enabling them to work more efficiently. This study analyzed potential differences between the evaluations given by pre-service teachers and those given by different generative AIs. A total of 507 pre-service teachers participated; each was provided with a rubric to evaluate 12 texts of varying types and qualities. The results showed that the generative AIs replicated the pre-service teachers' evaluations of written tasks quite accurately, with ChatGPT matching their behavior best, agreeing with the human evaluations close to 70% of the time. Differences in the evaluations given by pre-service teachers by gender and academic year were minimal. The generative AIs tended to overestimate the scores given to the texts, but this overestimation decreased as pre-service teachers' performance improved: the assessments of high-performing pre-service teachers were more closely aligned with those of the generative AIs than were those of lower-performing students. These results are useful because they highlight how generative AI could serve as an aid guiding the pedagogical knowledge of pre-service teachers in digital assessment tasks.

Full Text

Paper version not known