This study aimed to develop and revise an analytic rubric based on a survey of the difficulties raters experienced when assessing the English summaries of Japanese learners of English, as reported in a previous study (Kato, in press). Three raters repeatedly discussed and established four categories: “Integration,” “Language Use,” “Paraphrasing,” and “Content Accuracy,” and created descriptors for each category on a scale from 0 to 5. The three raters then evaluated summaries written by 20 Japanese university students for two original English texts (Texts A and B) of different types. The results showed that inter-rater reliability was sufficiently high for all categories for both Texts A and B. Moreover, the correlations between categories indicated that the categories measured independent constructs. These findings suggest that the analytic rubric developed on the basis of Kato’s (in press) survey of raters is also useful in terms of reliability and validity. This study proposed an analytic rubric that can serve as a foundation for constructing a more user-friendly rubric for future raters.
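The abstract reports inter-rater reliability and between-category correlations without naming the specific statistics used. The following Python sketch is only an illustration of how such an analysis might be set up, using simulated data (20 students, three raters, and the four rubric categories named above); the reliability index shown (mean pairwise Pearson correlation between raters) and all score values are assumptions, not the study’s actual method or results.

```python
import numpy as np
import pandas as pd
from itertools import combinations

rng = np.random.default_rng(0)
categories = ["Integration", "Language Use", "Paraphrasing", "Content Accuracy"]
students = range(1, 21)          # 20 students, as in the study
raters = ["R1", "R2", "R3"]      # three raters, as in the study

# Hypothetical scores on the 0-5 scale: a latent level per student plus
# small rater-specific noise, for illustration only (not the study's data).
rows = []
for s in students:
    level = rng.integers(1, 5, size=len(categories))
    for r in raters:
        noisy = np.clip(level + rng.integers(-1, 2, size=len(categories)), 0, 5)
        rows.append({"student": s, "rater": r, **dict(zip(categories, noisy))})
scores = pd.DataFrame(rows)

# Inter-rater reliability per category: mean pairwise Pearson correlation
# between raters' scores across students (one simple consistency index).
for cat in categories:
    wide = scores.pivot(index="student", columns="rater", values=cat)
    pairs = [wide[a].corr(wide[b]) for a, b in combinations(raters, 2)]
    print(f"{cat}: mean pairwise r = {np.mean(pairs):.2f}")

# Correlations between categories on rater-averaged scores: low values would
# suggest the four categories tap relatively independent constructs.
means = scores.groupby("student")[categories].mean()
print(means.corr().round(2))
```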