Abstract

SOMETIMES I have said, more or less in jest, that when it comes to evaluating international studies, people lose their critical capacities and forget everything they ever learned in their courses in research methods and statistics. Results that would raise flags in any other endeavor receive no attention. One senses that the governments that participate in these studies, as well as the Organisation for Economic Co-operation and Development (OECD), are more interested in having the studies accepted than in examining them for critical problems. One wonders why.

The tendency to accept the results of these studies at face value has been, naturally, most prominent in those nations that score well. When France did not perform so well in PISA (the Programme for International Student Assessment), some French analysts claimed that this was because of an Anglo-Saxon bias in the format of the questions. This claim promptly brought forth refutations, including some from researchers in other Francophone countries. Countries that did well tended to gloat.

The uncritical gloating in England bothered one S. J. Prais of the National Institute of Economic and Social Research in London enough that he published an analysis in the Oxford Review of Education (vol. 29, no. 2, 2003). Prais observed that in the Third International Mathematics and Science Study (in both TIMSS-95 and TIMSS-99, the latter referred to as TIMSS-R in the U.S.), the United Kingdom had scored some 40 points lower than Switzerland, France, Belgium, the Czech Republic, and Hungary. Yet in PISA, the British had finished some 20 points ahead of these nations. The total 60-point swing -- the 40-point deficit erased plus the 20-point lead gained -- represents two-thirds of a standard deviation, not an insignificant number. Even more remarkable, Prais observed, this enormous improvement had occurred in only one year, between the 1999 administration of TIMSS-R and the 2000 administration of PISA. Most remarkable of all, because TIMSS-R had tested 14-year-olds in 1999, while PISA had tested 15-year-olds in 2000, both studies had sampled the same group of students: those born in 1984.

When the early data on PISA appeared, Prais reports, he and his colleagues were merely perplexed. They had no way of probing deeper into the apparent contradiction. With the emergence of fuller reports, though, they came up with a series of troubling questions that pertained to "the nature of the questions asked of pupils in the PISA survey in contrast to previous surveys; differences in the intended age-group covered in the surveys; . . . the representativeness of English schools agreeing to participate; and the representativeness of pupils actually taking the test within each English school."

Prais limited his analysis of items to those in mathematics, largely because earlier assessments had revealed math difficulties for English students. First, Prais reminded readers that PISA was not concerned with mastery of school subjects, or so it said, but with the ability to apply math knowledge to everyday life. On the PISA website, one reads, "Previous international assessments have concentrated on 'school' knowledge. PISA aims at measuring how well students perform beyond the school curriculum" (www.pisa.oecd.org/pisa/skills.htm). Prais questioned whether the math questions actually had anything to do with everyday life. He offered this item as an example: A graph shows the fluctuating speed of a racing car along a flat 3 km racing track (second lap), plotted against the distance covered along the track.
Pupils were assumed to know that the track was some form of closed loop; they were also assumed to know that there are normally various bends along it and that speed has to be reduced before entering each bend. Pupils were required to calculate from the graph the approximate distance from the starting line to the beginning of the longest straight section of the track, and similar matters; they then had to match the speed-distance graph with five possible track-circuit diagrams. …
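To make concrete the kind of graph reading the item requires, here is a minimal sketch in Python using invented speed-distance samples. The data points, the TOP_SPEED_MARGIN threshold, and all variable names are assumptions for illustration, not the actual PISA materials; the idea is simply that a straight section shows up as a run of near-top-speed readings, and the answer to the first part of the item is the distance at which the longest such run begins.

    # Illustrative sketch only: invented speed-distance samples for a 3 km lap,
    # loosely modeled on the racing-car item described above. None of these
    # figures come from the actual PISA materials.
    trace = [
        (0.0, 160), (0.2, 120), (0.4, 80),   # braking into a bend
        (0.6, 100), (0.8, 140), (1.0, 180),  # accelerating out of it
        (1.2, 200), (1.4, 200), (1.6, 200),  # sustained top speed: a straight
        (1.8, 120), (2.0, 90),               # the next bend
        (2.2, 130), (2.4, 170), (2.6, 150),
        (2.8, 110), (3.0, 160),
    ]  # (distance in km, speed in km/h)

    # Treat any run of samples near the lap's top speed as a straight section,
    # then report where the longest such run begins.
    TOP_SPEED_MARGIN = 15  # km/h below the maximum still counts as "flat out"
    top_speed = max(speed for _, speed in trace)

    best_start, best_len, run_start = None, 0.0, None
    for dist, speed in trace:
        if speed >= top_speed - TOP_SPEED_MARGIN:
            if run_start is None:
                run_start = dist  # a straight section begins here
        else:
            if run_start is not None and dist - run_start > best_len:
                best_start, best_len = run_start, dist - run_start
            run_start = None
    # Handle a straight that runs to the final sample
    if run_start is not None and trace[-1][0] - run_start > best_len:
        best_start, best_len = run_start, trace[-1][0] - run_start

    print(f"Longest straight begins about {best_start} km from the start line "
          f"and runs for roughly {best_len:.1f} km.")

In the actual item, of course, pupils read these values directly off the printed graph; the point of the sketch is only that "longest straight section" translates into "longest run of near-maximum speed" -- precisely the sort of inference Prais doubted had much to do with everyday life.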
