Background/Context
Two key uses of international assessments of achievement have been (a) comparing country performances to identify the countries with the best education systems and (b) generating insights about effective policy and practice strategies associated with higher learning outcomes. Do country rankings really reflect the quality of education in different countries? What are the fallacies of simply looking to higher-performing countries to identify strategies for improving learning in our own countries?

Purpose
In this article we caution against (a) using country rankings as indicators of better education and (b) using correlates of higher performance in high-ranking countries as a way of identifying strategies for improving education in our home countries. We elaborate on these cautions by discussing methodological limitations and by comparing five countries that scored very differently on the reading literacy scale of the 2009 PISA assessment.

Population
We use PISA 2009 reading assessment data for five countries/jurisdictions as examples to elaborate on the problems with interpreting international assessments: Canada, Shanghai-China, Germany, Turkey, and the United States, i.e., countries from three continents that span the spectrum of high-, average-, and low-ranking countries and jurisdictions.

Research Design
Using data from these five jurisdictions as examples, our analyses focus on the interpretation of country rankings and of correlates of reading performance within countries. We first examine the profiles of these jurisdictions with respect to high school graduation rates, school climate, student attitudes, and disciplinary climate, and how these variables relate to reading performance rankings. We then examine the extent to which two predictors of reading performance, reading enjoyment and out-of-school enrichment activities, may be responsible for higher performance levels.

Conclusions
This article highlights the importance of establishing the comparability of test scores and data across jurisdictions as the first step in making international comparisons based on international assessments such as PISA. When interpreting jurisdiction rankings in international assessments, researchers need to be aware that the relations between reading achievement rankings and rankings on the various factors one might think are related, individually or in combination, to quality of education form a variegated and complex picture. This makes it highly questionable to use reading score rankings as a criterion for adopting the educational policies and practices of other jurisdictions. Furthermore, reading scores vary greatly for different student sub-populations within a jurisdiction – e.g., gender, language, and cultural groups – that are all part of the same education system. Identifying effective strategies for improving education from correlates of achievement in high-performing countries should also be done with caution. Our analyses present evidence that two factors, reading enjoyment and out-of-school enrichment activities, cannot be considered solely responsible for higher performance levels. The analyses suggest that the PISA 2009 results are variegated with regard to attitudes toward reading and out-of-school learning experiences, rather than exhibiting clear differences that might explain the different performances among the five jurisdictions.