Abstract

The integration of Artificial Intelligence (AI) technologies in education has driven significant advances, particularly in assessment and grading. This research examines the potential of large language models, specifically OpenAI's ChatGPT, to simulate human-like interactions and accurately grade student assessments. To that end, the study compares the grading performance of ChatGPT with that of human correctors for a sample of second-year university students. The findings indicate only a moderate correlation between the grades assigned by ChatGPT and those of the human correctors, suggesting that its capabilities are nuanced: the model can provide comprehensive feedback and help streamline the grading process, yet its scores do not fully align with human judgment. While the study highlights the benefits of AI integration in education, it also addresses potential risks, including the exacerbation of educational inequalities and the limitations inherent in AI's automated nature. This research contributes to the ongoing discourse on AI's role in education, emphasizing the importance of striking a balance between AI and human instruction for optimal educational outcomes.
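As a concrete illustration of the kind of agreement analysis the abstract refers to, the minimal sketch below computes a Pearson correlation between grades assigned to the same assessments by a human corrector and by ChatGPT. The grade values, scale, and variable names are hypothetical and are not taken from the study's data or pipeline.

    # Minimal sketch (illustrative data, not the study's): measure agreement
    # between human-assigned and ChatGPT-assigned grades for the same set of
    # student assessments using a Pearson correlation coefficient.
    from scipy.stats import pearsonr

    # Hypothetical grades for ten assessments, on a 0-20 scale.
    human_grades   = [12.0, 15.5, 9.0, 18.0, 14.0, 11.5, 16.0, 8.5, 13.0, 17.5]
    chatgpt_grades = [13.5, 14.0, 11.0, 16.5, 15.0, 10.0, 17.0, 11.5, 12.0, 15.5]

    r, p_value = pearsonr(human_grades, chatgpt_grades)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
    # An r in roughly the 0.4-0.6 range would correspond to the "moderate
    # correlation" described in the abstract; values near 1.0 would mean the
    # two graders score the assessments almost identically.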
