Abstract

The purpose of large-scale international assessments is to compare educational achievement across countries. For such cross-national comparisons to be meaningful, the participating students must be representative of the target population. In this paper, we consider whether this is the case for Canada, a country widely recognised as high performing in the Programme for International Student Assessment (PISA). Our analysis illustrates how the PISA 2015 sample for Canada only covers around half of the 15-year-old population, compared to over 90% in countries like Finland, Estonia, Japan and South Korea. We discuss how this emerges from differences in how children with special educational needs are defined and rules for their inclusion in the study, variation in school participation rates and the comparatively high rates of pupils’ absence in Canada during the PISA study. The paper concludes by investigating how Canada’s PISA 2015 rank would change under different assumptions about how the non-participating students would have performed were they to have taken the PISA test.

Highlights

  • The Programme for International Student Assessment (PISA) is an important international study of 15-year-olds’ achievement in reading, science and mathematics

  • If certain groups that we would not expect to perform well on the PISA test are routinely excluded in some nations but not in others (e.g. Japan and South Korea), this has the potential to bias comparisons between these nations, a problem the Organisation for Economic Co-operation and Development (OECD) recognises

  • It is for this reason that we have used freedom of information laws to obtain and publish—for the first time—the full school-level non-response bias analysis (NRBA) that was conducted for Canada in PISA 2015

Introduction

The Programme for International Student Assessment (PISA) is an important international study of 15-year-olds’ achievement in reading, science and mathematics. We conduct sensitivity analyses that estimate the scores that excluded and non-responding students would need to have achieved in order to ‘disturb’ a finding (Gorard and Gorard 2016); in other words, to make the difference between countries disappear. We argue that this is a more important reflection of uncertainty in the Canadian PISA results than the standard forms of statistical inference (confidence intervals and statistical significance tests) routinely reported by the OECD, as it captures different forms of bias rather than sampling variation alone. Our results illustrate how Canada’s PISA results could change in non-trivial ways, relative to other countries, under plausible assumptions about how excluded and non-responding students would have performed on the test. It is concluded that the OECD should do more to communicate the uncertainty in PISA results due to sample exclusions and missing data.
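The logic of such a sensitivity analysis can be sketched in a few lines of code. The following is an illustrative sketch, not the authors' actual method, and all numbers in it (coverage rate, observed mean, assumed scores) are hypothetical: it simply blends the observed mean for covered students with an assumed mean for the non-covered share of the target population, showing how the overall figure shifts under different assumptions.

```python
# Illustrative sketch of a coverage sensitivity analysis.
# All figures below are hypothetical, chosen for illustration only.

def adjusted_mean(observed_mean: float, coverage: float, assumed_mean: float) -> float:
    """Population mean implied by blending the observed mean for covered
    students with an assumed mean for the non-covered remainder."""
    return coverage * observed_mean + (1.0 - coverage) * assumed_mean

# A hypothetical country covering 50% of its 15-year-olds, with an
# observed mean score of 528, under three assumptions about how the
# non-covered students would have performed.
for assumed in (400, 450, 500):
    print(f"assumed={assumed} -> adjusted mean {adjusted_mean(528, 0.50, assumed):.1f}")
```

Under these hypothetical inputs the adjusted mean ranges from 464 to 514, illustrating how a country's apparent standing can move in non-trivial ways when half the target population is missing from the sample.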

Target population and exclusions
Sample design
School response rate criteria
Non-response bias analyses
Pupil response rates
Weighting for non-response
Summary
Findings
Conclusions
