Abstract

Free-text tasks, where students need to write a short answer to a specific question, serve as a well-established method for assessing learner knowledge. To address the high cost of manually scoring these tasks, automated scoring models can be used. Such models come in various types, each with its own strengths and weaknesses. Comparing these models helps in selecting the most suitable one for a given problem. Depending on the assessment context, this decision can be driven by ethical or legal considerations. When implemented successfully, a scoring model has the potential to substantially reduce costs and enhance the reliability of the scoring process. This article compares the different categories of scoring models across a set of crucial criteria that have immediate relevance to model employment in practice.
