Abstract

Middle bias has been reported for responses to multiple-choice test items used in educational assessment. It has been claimed that this response bias probably occurs because test developers tend to place correct responses among middle options, so that tests present a middle-biased distribution of answer keys. However, this response bias could also be driven by strong distractors being more frequently located among middle options. In this study, the frequency of responses to a Chilean national examination used to rank students seeking access to higher education was used to categorize distractors by attractiveness level. The distribution of different distractor types (best distractor, non-functioning distractors…) was analyzed across 110 tests of 80 five-option items administered to assess several disciplines over five consecutive years. Results showed that the strongest distractors were more frequently found among the middle options, most commonly at option C. In contrast, the weakest distractors were more frequently found at the last option (E). This pattern did not vary substantially across disciplines or years. Supplementary analyses revealed that a similar position bias for distractors can be observed in tests administered in countries other than Chile. Thus, the location of different types of distractors may provide an alternative explanation for the middle bias reported in the literature for test responses. Implications for test developers, test takers, and researchers in the field are discussed.
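To make the categorization concrete, here is a minimal sketch in Python with illustrative data (the article reports no code, and the function and values below are hypothetical). It assumes per-item selection proportions for each option and adopts the common psychometric convention that a distractor chosen by fewer than 5% of examinees is non-functioning; the study's exact criteria may differ.

    # Hypothetical sketch: label each distractor of a five-option item by
    # attractiveness, given the share of examinees choosing each option.
    # The 5% non-functioning cut-off is a common convention, assumed here.

    def classify_distractors(proportions: dict[str, float], key: str,
                             nf_threshold: float = 0.05) -> dict[str, str]:
        # Exclude the answer key; only distractors are classified.
        distractors = {opt: p for opt, p in proportions.items() if opt != key}
        # The best distractor is the most frequently chosen wrong option.
        best = max(distractors, key=distractors.get)
        labels = {}
        for opt, p in distractors.items():
            if opt == best:
                labels[opt] = "best"
            elif p < nf_threshold:
                labels[opt] = "non-functioning"
            else:
                labels[opt] = "functioning"
        return labels

    # Example item: key is B; C attracts most wrong answers, E almost none.
    item = {"A": 0.10, "B": 0.55, "C": 0.25, "D": 0.07, "E": 0.03}
    print(classify_distractors(item, key="B"))
    # {'A': 'functioning', 'C': 'best', 'D': 'functioning', 'E': 'non-functioning'}

Applying such a labeling item by item, and tallying where each label falls among positions A–E across tests, yields the positional distribution of distractor types that the study examines.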

Highlights

  • Multiple-choice tests are widely used in educational assessment, students’ performance on these tests being sometimes highly consequential (Gierl et al., 2017)

  • A visual inspection of the distribution of these frequencies showed that the strongest distractors were, in general, more likely to be found among the middle options, whereas the weakest ones were mostly found at the last option

  • Previous studies about response option placement have shown that answer keys are not uniformly distributed in many multiple-choice tests, keys being more frequently positioned as a middle option than as an edge option (Attali and Bar-Hillel, 2003; Authors, 2021, under review; Metfessel and Sax, 1958)


Introduction

Multiple-choice tests are widely used in educational assessment, students’ performance on these tests being sometimes highly consequential (Gierl et al., 2017). Even though item-writing guidelines have been advanced in the literature to help test developers design better multiple-choice instruments (Haladyna and Downing, 1989a; Haladyna et al., 2002; Haladyna and Rodriguez, 2013), item-writing flaws are still commonly found, affecting tests’ psychometric properties, students’ scores, and even pass-fail outcomes (Downing, 2005; Tarrant and Ware, 2008; Ali and Ruit, 2015). One rather common test construction flaw is that the placement of correct responses (called answer keys) across a test is middle-biased, key position providing an unwanted strategic clue to examinees (Metfessel and Sax, 1958; Haladyna and Downing, 1989b; Attali and Bar-Hillel, 2003). One recent explanation for students’ response bias lies in test developers’ own middle bias when positioning answer keys (Bar-Hillel, 2015).
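As an illustration of how such a middle bias in key placement can be detected, here is a minimal sketch, again in Python with hypothetical counts for an 80-item five-option test (not the study's data). It compares the observed key positions against a uniform distribution with a chi-square goodness-of-fit test (scipy.stats.chisquare).

    # Hypothetical sketch: test whether answer-key positions depart from
    # a uniform distribution across options A-E. Counts are illustrative.
    from scipy.stats import chisquare

    options = ["A", "B", "C", "D", "E"]
    observed = [12, 18, 22, 17, 11]   # key counts across an 80-item test
    stat, p = chisquare(observed)     # default null: equal expected counts
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")
    for opt, n in zip(options, observed):
        print(opt, n / sum(observed))

A small p-value would indicate that keys are not placed uniformly; inspecting the per-option proportions then shows whether the excess falls on the middle positions, as the studies cited above report.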
