Abstract

Much has been written in the last decade about the spotlight that the No Child Left Behind Act (NCLB) shines on school performance. Proponents and opponents alike are quick to discuss the law’s rigid definitions of school performance, exemplified by its classification of schools as making or not making Adequate Yearly Progress (AYP) based largely on annual tests in reading and mathematics, its disaggregation of performance by student subgroups, and its requirement that all schools reach 100% proficiency. Yet for all its rigidity, the law has offered schools little guidance on how to make use of the performance data that the new systems provide or how to design improvement efforts. As policymakers discuss ways to change NCLB or design new federal education policies targeted at improving academic achievement, we present new research findings that can help to inform those discussions. NCLB rests on the assumption that, given new data from testing, public attention to student performance, and sanctions for poor results, teachers and school leaders will be motivated and able to identify and adopt successful strategies for their students (Stecher, Epstein, Hamilton, Marsh, Robyn, McCombs, et al., 2008; Hamilton, Berends, & Stecher, 2005; Haertel & Herman, 2005; Linn, 2005). For this assumption to hold, being identified as “in need of improvement” (the designation for schools that fail to meet AYP goals) would have to set off a chain reaction in which school or district personnel examine performance data, draw conclusions about where their challenges lie, search for programs and materials to address those challenges, and finally implement the new programs, balancing fidelity to the programs’ designs with sensitivity to local context.
