Abstract
Papay (2011) found that teacher value-added measures (VAMs) estimated under the most common pre/post testing timeframe, current-year spring relative to previous spring (SS), are essentially unrelated to those same teachers’ VAMs estimated under next fall relative to current fall (FF). This is concerning because the choice of timeframe, made solely as an artifact of the timing of statewide testing, produces an entirely different ranking of teachers’ effectiveness. Because subsequent studies (in grades K/1) have not replicated these findings, we revisit and extend Papay’s analyses in another Grade 3–8 setting. We find similarly low correlations (.13–.15) that persist across value-added specifications. We delineate and apply a literature-based framework for assessing the role of summer learning loss in producing these low correlations.
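To make the SS/FF comparison concrete, below is a minimal, hypothetical sketch in Python using simulated scores; it is not the paper’s data or model. Teacher value-added is proxied by the mean student residual from an OLS regression of post-test on pre-test, computed once under the SS timeframe and once under FF. A teacher-level summer-learning component, which enters the two timeframes’ gains differently, is what drives the low SS/FF correlation here. All parameter values and the simplified VAM estimator are assumptions for illustration.

```python
# Hypothetical illustration only: simulate scores, then estimate a crude
# teacher VAM under two timeframes and correlate the resulting rankings.
#   SS: current spring regressed on previous spring
#   FF: next fall regressed on current fall
import numpy as np

rng = np.random.default_rng(0)
n_teachers, per_teacher = 200, 25

# Shared instructional effect, plus teacher-level summer components that
# contaminate each timeframe differently: the preceding summer enters the
# SS gain, the following summer enters the FF gain (assumed variances).
instr = rng.normal(0.0, 0.5, n_teachers)
summer_before = rng.normal(0.0, 1.0, n_teachers)
summer_after = rng.normal(0.0, 1.0, n_teachers)

teacher = np.repeat(np.arange(n_teachers), per_teacher)
n = teacher.size

# Timeline: prev spring -> summer -> curr fall -> instruction -> curr spring
#           -> summer -> next fall
prev_spring = rng.normal(0.0, 1.0, n)
curr_fall = prev_spring + summer_before[teacher] + rng.normal(0.0, 0.5, n)
curr_spring = curr_fall + instr[teacher] + rng.normal(0.0, 0.5, n)
next_fall = curr_spring + summer_after[teacher] + rng.normal(0.0, 0.5, n)

def vam(post, pre):
    """Teacher VAM proxy: mean student residual from OLS of post on pre."""
    X = np.column_stack([np.ones(n), pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    resid = post - X @ beta
    return (np.bincount(teacher, weights=resid, minlength=n_teachers)
            / np.bincount(teacher, minlength=n_teachers))

vam_ss = vam(curr_spring, prev_spring)  # spring-to-spring timeframe
vam_ff = vam(next_fall, curr_fall)      # fall-to-fall timeframe

print("corr(SS VAM, FF VAM):", round(np.corrcoef(vam_ss, vam_ff)[0, 1], 2))
```

Under these assumed variances the two VAM series correlate only weakly, even though the simulated instructional effects are identical across timeframes, mirroring the summer-learning mechanism the framework above is meant to probe.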