Abstract

Interim and summative assessments are often used to make decisions about student writing skills and instructional needs, but the extent to which different raters and score types might introduce bias for some groups of students is largely unknown. To evaluate this possibility, we analyzed interim writing assessments and state summative test data for 2,621 students in Grades 3-11. Both teachers familiar with the students and researchers unaware of students' identifying characteristics evaluated the interim assessments with analytic rubrics. Teachers assigned higher scores on the interim assessments than researchers. Female students had higher scores than males, and English learners (ELs), students eligible for free or reduced-price school lunch (FRL), and students eligible for special education (SPED) had lower scores than other students. These differences were smaller with researcher ratings than with teacher ratings. Across grade levels, interim assessment scores were similarly predictive of state rubric scores, scale scores, and proficiency designations across student groups. However, students identified as Hispanic, FRL, EL, or SPED had lower scale scores and a lower likelihood of reaching proficiency on the state exam. For this reason, these students' risk of unsuccessful performance on the state exam would be greater than predicted on the basis of interim assessment scores. These findings highlight the potential importance of masking student identities when evaluating writing to reduce scoring bias, and suggest that the written composition portions of high-stakes writing examinations may be less biased against historically marginalized groups than the multiple-choice portions of these exams. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
