Abstract
Enough is enough, Mr. Pogrow says. There is now a large and consistent set of independent studies concluding that there is no effect from Success for All, New American Schools, or any other schoolwide reform model. It is time to admit the mistakes of the Nineties and change reform policy and practice.

NEW CLAIMS for the effectiveness of Success for All (SFA) by its developers and their associates need to be viewed in the context of a documented history, a history of scientifically invalid research creating the appearance of success in a way that masks actual failure. Indeed, this is the seventh article that I have written bird-dogging the validity of the claims behind SFA, sometimes referred to as Roots & Wings. It is the third in this journal.1 I write this in response to "Effects of Success for All on TAAS Reading Scores: A Texas Statewide Evaluation," by Eric Hurley, Anne Chamberlain, Nancy Madden, and Robert Slavin, which was featured in the June 2001 Kappan.

The reason I have tracked the claims and actual effects of SFA is that never before has the presumed success of a program enabled its developers and associates to exert such powerful influence over so many different aspects of the profession. The developers and associates have controlled the research about the effectiveness of SFA as well as government policy studies on how to help the disadvantaged. Thus, if SFA is not successful, the resulting policies for helping the disadvantaged during the 1990s were misguided, and much of the research base generated for helping the disadvantaged is invalid. Despite the widespread rhetoric of "knowing what works," it turns out that learning gaps between minority students and white students on the National Assessment of Educational Progress (NAEP) actually increased in the 1990s, and the reading scores of the lowest quartile actually declined.2 In other words, despite the strong reform efforts of the decade, we actually went backward.
In retrospect, this result should not be surprising, given that the supposed success of SFA led to the major policy shift of withdrawing specialized help for the disadvantaged in favor of schoolwide reform models.

Baltimore: A Forecast of Future Failure

SFA started as an experiment in Baltimore. Slavin and Madden published a series of articles purporting to show that SFA schools were doing substantially better. The articles featured many impressive statistical tables of effect sizes that dramatically favored SFA schools. In my previous articles I showed that there were no real differences3 and that the creamed sample of SFA students, which consisted only of students who had been at the same school for five years and from which virtually all the special education students had disappeared, was reading three years below grade level by the time it reached sixth grade. (This equates to a more representative sample of students reaching sixth grade four or more years below grade level.) Other studies subsequently documented the obvious failure,4 and the district dropped the program. But in the interim the profession and policy makers came to view this experiment as having demonstrated the success of SFA.

Building the SFA Empire

Money began to flow to SFA and to its research center and foundation. An industry was created to produce articles showing the effectiveness of SFA. My previous articles have shown that these studies invariably either stacked the deck, dropped schools and/or groups of students, or claimed success even though the students were generally still doing terribly. In the meantime, the belief that SFA was successful led the U.S. Department of Education (ED) to focus on comprehensive school reform and to award one-third of its discretionary research and development (R&D) funding to SFA-related work. ED also awarded the key policy analysis study on the future of Title I to the center at Johns Hopkins University where SFA was developed. …