Abstract

The use of mixed‐format tests made up of multiple‐choice (MC) items and constructed response (CR) items is popular in large‐scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district‐ and state‐level assessments in the United States. Rater effects, or raters’ scoring tendencies that result in performances receiving different scores than are warranted given their quality, are a concern for the interpretation of scores on CR items. However, few published studies have systematically examined the impact of ignoring rater effects, when they are present, on estimates of student ability from large‐scale mixed‐format assessments. Using results from an analysis of NAEP data, we systematically explored the impact of rater effects on student achievement estimates. Our results suggest that in conditions that reflect many large‐scale mixed‐format assessments, directly modeling rater effects yields more accurate student achievement estimates than estimation procedures that do not incorporate raters. We consider the implications of our findings for research and practice.
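
As a point of reference only, since the abstract does not name the specific model the authors used, one common way to "directly model rater effects" is the many‐facet Rasch model, which adds a rater severity term to the item response function for CR score categories:

\ln\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k

where \theta_n is the ability of examinee n, \delta_i the difficulty of CR item i, \lambda_j the severity of rater j, and \tau_k the threshold for score category k. Under a formulation like this, ignoring raters amounts to dropping \lambda_j, so any systematic severity or leniency in scoring is absorbed into the \theta_n estimates.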
