Stephen Krashen got the numbers wrong in his June 2002 Kappan article, according to Mr. Innes, who seeks to set the record straight.

THERE is compelling evidence, writes Stephen Krashen, that a reported decline in reading performance in California in the late 1980s and early 1990s really didn't occur, or at the very least was not caused by the introduction of whole-language reading in the state in 1987.1 He presents two sets of testing data and some other statistics to support his assertions. But Krashen's analysis of those data has a number of problems.

Krashen Gets the Scoring Data Wrong

Krashen claims there is an absence of a clear pattern of changes in the California Achievement Program (CAP) reading scores. That, he says, indicates whole language did not cause a noticeable decline in reading. As evidence he presents a table of CAP data for grades 3, 6, 8, and 12 for the years from 1984 to 1990. But Krashen's line of reasoning concerning the CAP is illogical. When the data are thoughtfully examined for impacts on young readers, the results actually refute Krashen's contentions.

Even if teachers throughout California had adopted whole-language methods instantaneously in 1987 (a highly unlikely situation), very few of Krashen's data points are relevant to his argument. For example, the first-graders of 1987 took the third-grade CAP in 1989, and 1988's first-graders took the CAP in 1990. Among all the CAP data Krashen provides, only these performances can be considered purely the product of the whole-language program. In addition, second-graders from 1987 took the CAP in 1988; they had a mixed experience of whole language and the previous instructional methods for reading. But all of the other students listed in Krashen's CAP data table should have learned the basics of reading before California's whole-language reform ever started.
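The cohort arithmetic above can be sketched in a few lines of code (a minimal illustration only; the function names and classification labels are mine, not from the article or from Krashen's table):

```python
# A first-grader entering in year Y sits the third-grade CAP in year Y + 2.
# The 1987 reform year is taken from the article; everything else here is
# an illustrative construction.

REFORM_YEAR = 1987  # whole-language reading introduced in California

def cap_year_for_first_grade(entry_year):
    """Year in which this cohort of first-graders takes the third-grade CAP."""
    return entry_year + 2

def exposure(entry_year):
    """Classify a cohort's reading instruction relative to the 1987 reform."""
    if entry_year >= REFORM_YEAR:
        return "whole language only"
    if entry_year == REFORM_YEAR - 1:   # second-graders when the reform began
        return "mixed"
    return "pre-reform methods"

# Only the 1987 and 1988 entering cohorts (tested in 1989 and 1990)
# are purely whole-language data points in the CAP table:
for entry in (1985, 1986, 1987, 1988):
    print(entry, cap_year_for_first_grade(entry), exposure(entry))
```

Running the loop confirms the article's mapping: the 1987 and 1988 first-grade cohorts are the only ones tested entirely under whole language, and the 1986 cohort (second-graders in 1987, tested in 1988) is the mixed group.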
In particular, the large amount of CAP data that Krashen presents for grades 6 through 12 can provide no support for his argument whatsoever. None of these older students had any involvement with California's whole-language reading reform until well after their reading habits were established. Quite simply, most of Krashen's data are superfluous to his argument and only serve to cloud the issue.

But the third-grade data in Krashen's table do pertain, and they disprove his contentions. Third-grade reading scores on the CAP rose from 268 to 282 between 1984 and 1987, a period that predates California's whole-language reform. These CAP data actually support the idea that California was improving early reading instruction before the whole-language reform began.

Things changed when whole language came along. CAP third-grade scores stayed flat in 1988. Students who were in second grade when the whole-language reform began took the CAP in that year, and these are the students who had a mixed experience with whole language. Then in 1989, the first group of students who had been taught exclusively from first grade on with California's whole-language program took the CAP. Their scores declined. Third-grade reading declined again the following year, as the second group of students with exclusively whole-language experience took the CAP. Thus the CAP scores, thoughtfully analyzed, directly undermine Krashen's argument.

Krashen's problems with analyzing test scores do not end with the CAP. His discussion of California's fourth-grade reading scores on the National Assessment of Educational Progress (NAEP) is incomplete. Krashen correctly says that NAEP didn't start generating state-level fourth-grade reading scores until 1992. He correctly notes that California scored 202 on the fourth-grade NAEP reading assessment in 1992 and 197 in 1994. California's fourth-grade NAEP reading scores then recovered to 202 in 1998.
But what Krashen does not say (and NAEP reports are not terribly forthcoming about this situation, either) is that the exclusion of students with learning disabilities steadily eroded NAEP's validity during this period. …