Abstract

This comparative study examined differences in interrater reliability between two measurement methods: essay tests and rubric-based scoring. Thirty students and thirty science teachers participated in the study. Interrater reliability was estimated using Fleiss' kappa, and the hypotheses were tested with the exact Mann-Whitney U test to strengthen the validity of the inferences. The results showed that the interrater reliability of restricted-response items was higher than that of context-dependent tasks, and also higher than that of extended-response items scored with the analytic and the holistic rubric. Likewise, the interrater reliability of extended-response items was higher than that of context-dependent tasks scored with the analytic and the holistic rubric.
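As a minimal sketch of the two analyses named above (not the authors' code), the following Python snippet shows how Fleiss' kappa and an exact Mann-Whitney U test are commonly computed with statsmodels and SciPy. All data, group labels, and the 0-4 rating scale here are hypothetical placeholders for illustration only.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical ratings: 30 responses, each scored by 5 raters on a
# 0-4 rubric scale (rows = subjects, columns = raters).
ratings = rng.integers(0, 5, size=(30, 5))

# Fleiss' kappa expects a subjects-by-categories count table,
# so the raw rater matrix is aggregated first.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")

# Exact Mann-Whitney U test comparing per-item reliability estimates
# from two item types (placeholder values, purely illustrative).
restricted = rng.uniform(0.4, 0.9, size=15)   # e.g., restricted-response items
context_dep = rng.uniform(0.3, 0.8, size=15)  # e.g., context-dependent tasks
u_stat, p_value = mannwhitneyu(restricted, context_dep, method="exact")
print(f"U = {u_stat:.1f}, exact p = {p_value:.4f}")

The method="exact" argument requests the exact sampling distribution of U rather than the normal approximation, which matches the abstract's note that exact testing was used with these small samples.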
