Running Records are considered an excellent formative assessment tool because they yield results that educators can use to make their teaching more responsive. Despite the technical nature of scoring Running Records and the important decisions attached to their analysis, few studies have investigated assessor accuracy. We measured accuracy across 114 teachers who analyzed a pre-coded Running Record, comparing their quantification and interpretation of the record against a scoring key. We also used Rasch measurement to examine which items teachers found easy or hard to score accurately. Analyses revealed wide variation in teachers' accuracy, particularly in interpretation, which, according to the item difficulty analysis, stemmed from a few specific scoring mistakes. We share implications for improving training so that individuals administer Running Records more reliably.