Abstract

Measuring teachers' ability is part of evaluation in the national education system. Instruments in the form of essay questions have become an alternative because they are considered more authentic than multiple-choice questions. However, using an essay instrument raises concerns about scoring consistency and resource efficiency. The purpose of this study was to examine the inter-rater reliability between an automated essay scoring system and human raters in measuring teachers' knowledge with an essay instrument. The research was conducted with a random sample of 200 junior high school science teachers. A quantitative design was applied to investigate the intra-class correlation coefficient (ICC) and Pearson's correlation (r) as indicators of the performance of the automated essay scoring system (UKARA). The main data were participants' test answers, in the form of restricted essay responses, collected online. The inter-rater reliability coefficient between UKARA and the human raters was in the high category (above 0.7) for all items, meaning that the scores given by UKARA correlated strongly with those given by the human raters. These results indicate that UKARA has adequate capability as an automated essay scoring system for measuring science teachers' knowledge.
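The abstract reports the ICC and Pearson's r as agreement indicators between UKARA and human rater scores. As a minimal sketch of how such coefficients can be obtained from paired per-item scores, assuming a two-way random-effects, absolute-agreement ICC(2,1) (the abstract does not state which ICC variant was used) and illustrative made-up data:

```python
import numpy as np
from scipy.stats import pearsonr

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    scores: (n_subjects, k_raters) matrix of ratings.
    Assumption: this is one common ICC variant, not necessarily the
    one used in the study.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the total sum of squares.
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1).
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical per-participant scores on one item: UKARA vs. a human rater.
ukara = np.array([3.0, 2.5, 4.0, 3.5, 2.0, 4.5])
human = np.array([3.0, 3.0, 4.0, 3.0, 2.5, 4.5])

r, _ = pearsonr(ukara, human)
icc = icc_2_1(np.column_stack([ukara, human]))
print(f"Pearson r = {r:.3f}, ICC(2,1) = {icc:.3f}")
```

Under the study's criterion, values above 0.7 for both coefficients would place an item in the high-reliability category.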
