Abstract

Assessing students’ personal characteristics, as well as the structures and processes of teaching and learning, is an integral part of the Programme for International Student Assessment (PISA). Providing input for solid evidence-based educational policies, one of PISA’s main aims, poses substantial methodological challenges: various biases in self-reported data across cultures persistently hamper attempts to unpack the black box of student learning and jeopardize PISA’s scope for evidence-based policy making. This chapter focuses on challenges in the design and analysis of the PISA background questionnaires, especially for noncognitive outcome measures. Our conceptual background, however, is not primarily PISA-specific but draws on comparative work in the social and behavioral sciences more broadly. We first review sources of bias at the construct, method, and item levels, as well as levels of equivalence (construct, metric, and scalar invariance), using examples from educational surveys. We then illustrate the strategies used in the PISA project to deal with different types of bias. Specifically, we outline qualitative, nonstatistical strategies, such as instrument development and adaptation and the standardization of assessment procedures, as well as statistical strategies for mitigating bias. State-of-the-art psychometric procedures for examining the comparability of these noncognitive outcome data, including partial invariance and approximate invariance, are also discussed. We conclude by suggesting topics for future research.
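
To make the levels of equivalence named above concrete, the following sketch restates them in standard multigroup confirmatory factor analysis notation; the model, symbols, and constraint labels are a generic textbook illustration added for this summary, not material taken from the chapter itself.

\[
\mathbf{x}_{ig} = \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g \, \boldsymbol{\xi}_{ig} + \boldsymbol{\delta}_{ig}
\]

Here $\mathbf{x}_{ig}$ denotes the observed item responses of student $i$ in country $g$, $\boldsymbol{\tau}_g$ the item intercepts, $\boldsymbol{\Lambda}_g$ the factor loadings, $\boldsymbol{\xi}_{ig}$ the latent construct, and $\boldsymbol{\delta}_{ig}$ the residuals. Construct (configural) invariance requires only that the same items measure the same construct in every country, with $\boldsymbol{\tau}_g$ and $\boldsymbol{\Lambda}_g$ free to vary; metric invariance constrains the loadings to equality across countries ($\boldsymbol{\Lambda}_g = \boldsymbol{\Lambda}$), licensing comparisons of relationships; scalar invariance additionally constrains the intercepts ($\boldsymbol{\tau}_g = \boldsymbol{\tau}$), licensing comparisons of latent means. Partial invariance relaxes these equality constraints for a subset of items, and approximate invariance replaces exact equality with small, prior-bounded deviations across groups.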
