Abstract
International surveys are increasingly being used to understand nonacademic outcomes such as math and science motivation, and to inform education policy within countries. Such instruments assume that the measure works consistently across countries, ethnicities, and languages—that is, they assume measurement invariance. While studies have demonstrated that some items in international survey measures are noninvariant using basic group comparisons, they do not investigate complex, intersectional sources of bias that go beyond group membership. In this study, we use an emergent method to examine the sensitivity of item parameters from the Programme for International Student Assessment (PISA) survey instruments to intersectional sources of bias. Results indicate that noninvariance exists for most of the items examined and that individual scores can change after accounting for the moderators. Although country-level rankings did not change substantively after accounting for bias, policymaking is likely to be influenced negatively if such sources of noninvariance are not addressed.