Abstract

In recent years, there has been growing use of data on rater cognition to inform test development and validity arguments. In this study, we examined differences in feature attention and categorisation between experienced and inexperienced raters for a college-level assessment of oral communication. The focus was twofold: (a) rater cognition as it informs rubric development; and (b) the impact of verbal protocols on data collection in a speech-scoring context. The findings indicate that the complexity of the scoring rubric may be as much of a threat to scoring consistency as raters’ pre-existing cognitive frameworks. A next step in the development of this tool is to simplify the rubric by identifying the critical features that most clearly define the construct and to modify training to address the cognitive influences raters bring to scoring. The results did not indicate a main effect of verbal reporting on scoring.
