Abstract

Typically, in standardised tests, girls underperform boys in mathematics while boys underperform girls in reading. Can these educational inequalities be driven by specific testing practices? This study addresses this question by drawing on data from the largest worldwide standardised test and exploiting random variation in the format of the tests assigned to students. I find a noticeable widening of the gender gap when a larger share of questions is multiple-choice rather than fill-in-the-gap. A 10 percentage point increase in the share of multiple-choice questions inflates female under-performance in mathematics by 0.025 standard deviations and male under-performance in reading by 0.035 standard deviations, accounting for nearly one-quarter of the overall gender gap in these subjects. I provide suggestive evidence that ruling out alternative options, rather than constructing an answer from scratch, may contribute to cognitive information overload among students with lower confidence (generally girls in mathematics and boys in reading). This cognitive load, in turn, has implications for a student's subsequent effort and performance.
