Abstract
The evaluation of student essay corrections has become a focal point in understanding the evolving role of Artificial Intelligence (AI) in education. This study assesses the accuracy, efficiency, and cost-effectiveness of ChatGPT's essay correction compared to human correction, with a primary focus on identifying and rectifying errors in grammar, spelling, sentence structure, punctuation, coherence, relevance, essay structure, and clarity. The research involves collecting essays from 100 randomly selected university students, covering diverse themes, with anonymity maintained and no prior corrections by humans or AI. An analysis sheet outlining the linguistic and informational elements to be evaluated serves as a benchmark for assessing the quality of corrections made by ChatGPT and by human evaluators. The study reveals that ChatGPT excels in fundamental language mechanics, demonstrating superior performance in areas such as grammar, spelling, sentence structure, relevance, and supporting evidence. However, thematic consistency remains an area where human evaluators outperform the AI. The findings emphasize the potential of a balanced approach that leverages both human and AI strengths for a comprehensive and effective essay correction process.