Abstract

A common approach to increasing test security in high-stakes testing in higher education is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue that the moderating role of speededness on item order effects cannot be neglected, as missing responses are commonly scored as incorrect in high-stakes testing. If test-takers run out of time before answering easy items at the end of the test, they are penalised more strongly than if they were instead unable to answer difficult items. Using an illustrative real-data example of a speeded test, we show that the potential consequences of ignoring item order can be substantial with respect to test fairness. Our proposed solution is to use a fixed item order across forms from the point at which the test may become speeded for some students, placing the most time-intensive items at the end of the test. A simulation based on real data from two university examinations of psychology students illustrates the usefulness of this approach.
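The penalty asymmetry described above can be sketched with a minimal, hypothetical numeric example (the item counts and solution probabilities below are illustrative assumptions, not data from the study): when missing responses are scored as incorrect, a test-taker who runs out of time loses the expected score of whatever items sit at the end of their form, so a form ending in easy items penalises speeded test-takers more than a form ending in hard items.

```python
# Hypothetical illustration: expected score lost to unreached items under
# "missing scored as incorrect". Solution probabilities are assumed values.
p_correct = {"easy": 0.9, "medium": 0.6, "hard": 0.3}

def expected_score(item_order, items_reached):
    # Items beyond the time limit receive 0 (scored as incorrect),
    # so only the items actually reached contribute expected points.
    return sum(p_correct[item] for item in item_order[:items_reached])

# Two forms with identical items but different orders:
form_a = ["easy", "medium", "hard"]   # hard item last
form_b = ["hard", "medium", "easy"]   # easy item last

# A speeded test-taker who only reaches the first two items:
score_a = expected_score(form_a, 2)   # misses the hard item
score_b = expected_score(form_b, 2)   # misses the easy item

print(round(score_a, 2))  # 1.5
print(round(score_b, 2))  # 0.9
```

With identical ability and identical items, the test-taker on form B loses 0.9 expected points to the time limit while the one on form A loses only 0.3, which is the fairness problem a fixed item order in the speeded part of the test is meant to avoid.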
