Abstract

Psychological achievement and aptitude tests are fundamental elements of the everyday school, academic and professional lives of students, instructors, job applicants, researchers and policymakers. In line with growing demands for fair psychological assessment tools, we aimed to identify psychometric features of tests, test situations and test-taker characteristics that may contribute to the emergence of test bias. Multi-level random effects meta-analyses were conducted to estimate mean effect sizes for differences and relations between scores from achievement or aptitude measures with open-ended (OE) versus closed-ended (CE) response formats. Results from 102 primary studies with 392 effect sizes revealed positive relations between CE and OE assessments (mean r = 0.67, 95% CI [0.57; 0.76]), with negative pooled effect sizes for the difference between the two response formats (mean d_av = -0.65, 95% CI [-0.78; -0.53]). Significantly higher scores were obtained on CE exams. Stem-equivalency of items, low-stakes test situations, written short answer OE question types, studies conducted outside the United States and before the year 2000, and test-takers' achievement motivation and sex were at least partially associated with smaller differences and/or larger relations between scores from OE and CE formats. Limitations and the results' implications for practitioners in achievement and aptitude testing are discussed.
