Abstract

States and school districts across the United States are increasingly using value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer learning with school-year gains. These “cross-year” VAMs yield biased estimates of teacher effectiveness when students with different propensities for summer learning are non-randomly assigned to classrooms. We investigate the practical implications of this problem by comparing estimates from “cross-year” VAMs to those from arguably more valid “within-year” VAMs using fall and spring test scores from the nationally representative Early Childhood Longitudinal Study – Kindergarten Cohort (ECLS-K). “Cross-year” and “within-year” VAMs frequently yield significantly different estimates of teacher effectiveness, and these differences persist even after conditioning on children’s summer activities.
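To make the distinction concrete, a stylized sketch of the two specifications is given below. This is an illustrative formulation, not necessarily the authors' exact model: $A$ denotes a test score for student $i$ taught by teacher $j$ in year $t$, $X_{it}$ is a vector of student covariates, and $\tau_{j}$ is the teacher effect of interest.

$$
\text{Cross-year VAM:}\quad A^{\text{spring}}_{it} = \beta\, A^{\text{spring}}_{i,t-1} + X_{it}\gamma + \tau_{j} + \varepsilon_{it}
$$

$$
\text{Within-year VAM:}\quad A^{\text{spring}}_{it} = \beta\, A^{\text{fall}}_{it} + X_{it}\gamma + \tau_{j} + \varepsilon_{it}
$$

In the cross-year specification the lagged score is the prior spring's test, so the modeled "gain" spans the intervening summer; in the within-year specification the baseline is the same year's fall test, so the gain is confined to the school year the teacher actually taught.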
