Abstract
AI-enabled assessment of student papers has the potential to provide both summative and formative feedback and to reduce the time spent on grading. Using auto-ethnography, this study compares AI-enabled and human assessment of business students' examination papers in a law module, based on previously established rubrics. Examination papers were corrected by the professor and then subjected to a series of tests using Gen-AI tools. While we were impressed by the personalised feedback, the accuracy of grading and the learning capacity of the Gen-AI tools, we found that the tools used are not yet satisfactory for fully autonomous correction, owing to erroneous grading, the hallucination phenomenon and verbose feedback that is not always personalised. An 8C model of the challenges of AI-enabled correction is outlined. This paper has implications for professors, HEIs, instructional designers and all those who correct student papers in third-level institutions.
Published in: Innovations in Education and Teaching International