Abstract

This article describes an empirical evaluation of the reliability and validity of a rubric for grading APA-style introductions written by undergraduate students. Levels of interrater and intrarater agreement were not extremely high but were similar to values reported in the literature for comparably structured rubrics. Rank-order correlations between graders who used the rubric and an experienced instructor who separately ranked the papers holistically provided evidence for the rubric's validity. Although the rubric has utility as an instructional tool, the data underscore the seemingly unavoidable subjectivity inherent in grading student writing. Instructors are cautioned that merely using an explicit, carefully developed rubric does not guarantee high reliability.
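
The validity evidence described above rests on a rank-order (Spearman) correlation between rubric-based grades and a holistic ranking. As a minimal sketch of that computation, using entirely hypothetical scores and rankings rather than data from the study:

```python
# Minimal sketch: Spearman rank-order correlation between rubric-based
# scores and an instructor's holistic ranking of the same papers.
# All numbers below are hypothetical illustrations, not data from the study.
from scipy.stats import spearmanr

# Hypothetical rubric scores for ten papers from one grader.
rubric_scores = [88, 92, 75, 81, 95, 70, 84, 78, 90, 86]

# Hypothetical holistic ranking of the same papers by an experienced
# instructor (1 = best paper).
holistic_ranks = [3, 2, 9, 7, 1, 10, 5, 8, 4, 6]

# Because 1 denotes the best paper, strong agreement between the rubric
# and the holistic judgments shows up as rho close to -1.
rho, p_value = spearmanr(rubric_scores, holistic_ranks)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```

On these illustrative numbers, rho is strongly negative (about -0.98), the pattern one would expect if the rubric ordered the papers much as the holistic instructor did.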
