Any independent reader or reviewer who took the time to read the actual evidence would have to conclude that the achievement effects of SFA are thoroughly and scientifically documented, and many reviewers have concluded exactly this, Mr. Slavin retorts.

IN HIS seventh article attacking research on Success for All, Stanley Pogrow engages in his usual tactics: making baseless accusation after baseless accusation in the hope that readers, who must long ago have stopped paying attention to the content itself, will figure (incorrectly) that with so many charges, some of them must be true. The current critique is in response to an article in the June Kappan.¹

This study presents Pogrow with a difficult challenge. Here is a study of 111 schools across Texas, reporting data from the Texas Assessment of Academic Skills (TAAS) Reading Scale that anyone with an Internet account could easily replicate. These schools serve more than 55,000 children. Their progress was compared to that of all Texas children and to that of subgroups according to ethnicity. These whole-state comparisons were made specifically to avoid charges that we selected particular control groups after the fact to make Success for All look good.

Apparently recognizing the impossibility of impeaching a study of this size, quality, and policy relevance, Pogrow spends a great deal of time rehashing decade-old criticisms about Baltimore and other issues that have nothing to do with Texas. I'll return to these. However, let's start with the Texas evaluation.

Pogrow's main claim, stated at the outset, is that Success for All (SFA) couldn't possibly have worked in Texas, because SFA doesn't work. It's hard to respond to this circular reasoning.

Pogrow then goes on to say that it's absurd to claim that SFA is responsible for the overall reduction in the minority/white achievement gap or for the overall rise in TAAS scores. Yet we never made any such claim. In each case, we compared the gains made by SFA students to those of other Texas students. TAAS scores did rise, but those of SFA students rose significantly more. The gap did narrow, but the gap for SFA students narrowed significantly more. Pogrow, who has opposed research involving control groups,² does not appear to understand this most fundamental element of research design.

He then criticizes our comparison of scores for (mostly minority) SFA students to those of the state as a whole, on the grounds that these are not equivalent groups. Yet we addressed exactly this issue in our article, making separate comparisons of African American students in SFA schools to African American students in the state as a whole and of Hispanic SFA students to Hispanic students in the state as a whole. In both cases, the SFA and other students were similar at pretest, yet the African American and Hispanic students in SFA schools gained significantly more than similar students elsewhere in Texas. To save readers the trouble of finding their June issue, I reproduce the relevant graphs below.

Pogrow accuses us of making SFA look good by stopping our analyses with the 1998 data. At the time we wrote the June article, data were available only through 1999, and the state itself cautioned against longitudinal analyses because of the significant changes in TAAS procedures introduced in 1999. In fact, Pogrow himself cites bilingual specialists who told him exactly this and suggested doing exactly what we did: analyzing scores through 1998. However, we now have data through 2001 (see Figure 3).
Once again, when scores are analyzed at the school level, SFA students gained significantly more than other Texas students. Because of the testing change in 1999, Figure 3 understates the amount of gain for both SFA and state scores, but both sets of scores are probably affected to a similar degree. Note that Figure 3 includes every school in the original study except one, which closed in 1999.

Pogrow accuses us of failing to include every Texas SFA school that began the program between 1994 and 1997, as we claimed. …