Abstract

School districts across the United States increasingly use value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer learning with school-year learning and potentially bias estimates of teacher effectiveness. We investigate the practical implications of this problem by comparing estimates from “cross-year” VAMs with those from arguably more valid “within-year” VAMs that use fall and spring test scores from the nationally representative Early Childhood Longitudinal Study–Kindergarten Cohort (ECLS-K). The “cross-year” and “within-year” VAMs frequently yield significantly different estimates of teacher effectiveness, and these differences persist even after conditioning on students’ participation in summer activities.
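To make the comparison concrete, the two approaches can be summarized by a stylized pair of regressions. The notation below is an illustrative sketch under standard VAM conventions, not the paper's exact specification: student i's spring score in year t is conditioned on either the prior spring score (cross-year) or the same-year fall score (within-year), so only the former folds summer learning into the estimated teacher effect.

Cross-year VAM:  A_{i,t}^{\text{spring}} = \beta\, A_{i,t-1}^{\text{spring}} + X_{i,t}'\gamma + \theta_{j(i,t)} + \varepsilon_{i,t}

Within-year VAM: A_{i,t}^{\text{spring}} = \beta\, A_{i,t}^{\text{fall}} + X_{i,t}'\gamma + \theta_{j(i,t)} + \varepsilon_{i,t}

Here A denotes test scores, X student covariates, \theta_{j(i,t)} the effect of student i's year-t teacher j, and \varepsilon an error term. In the cross-year specification, the interval between A_{i,t-1}^{\text{spring}} and the start of year t spans the summer, so any summer learning loss (or gain) is attributed to the year-t teacher unless it is explicitly modeled.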
