Abstract

The effectiveness of multiple-choice (MC) items depends on the quality of the response options—particularly how well the incorrect options (“distractors”) attract students who have incomplete knowledge. It is often contended that test-writers are unable to devise more than two plausible distractors for most MC items, and that the effort needed to do so is not worthwhile in terms of the items’ psychometric qualities. To test these contentions, I analyzed students’ performance on 545 MC items across six science courses that I have taught over the past decade. Each MC item contained four distractors, and the dataset included more than 19,000 individual responses. All four distractors were deemed plausible in one-third of the items, and three distractors were plausible in another third. Each additional plausible distractor increased item difficulty by an average of 13%. Moreover, an increase in plausible distractors led to a significant increase in the discriminability of the items, with the effect leveling off by the fourth distractor. These results suggest that—at least for teachers writing tests to assess mastery of course content—it may be worthwhile to eschew recent skepticism and continue to attempt to write MC items with three or four distractors.

Highlights

  • Multiple-choice items are popular for standardized tests and for classroom assessment of content mastery

  • When three or more distractors were chosen per item, the third-most plausible distractor was chosen by 16% of the students who chose an incorrect response

  • Even the least plausible distractor made up a substantial minority (10% on average) of the incorrect responses when all four distractors were chosen for an item (Figure 2)
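
The distractor-share figures in the highlights above (e.g., the 16% and 10% of incorrect responses) follow from a simple tally of which option each incorrect responder chose. As a minimal sketch only—the option labels and responses below are hypothetical, not data from the study—such a tally might look like this:

```python
from collections import Counter

def distractor_shares(responses, key):
    """Share of incorrect responses attracted by each distractor.

    responses: chosen option labels for one item, e.g. ["A", "C", "B", ...]
    key: the correct option's label
    """
    wrong = [r for r in responses if r != key]
    if not wrong:
        return {}
    return {option: count / len(wrong) for option, count in Counter(wrong).items()}

# Hypothetical item: key "A", distractors "B" through "E"
responses = ["A", "B", "C", "A", "B", "D", "E", "B", "C", "A", "B", "C"]
print(distractor_shares(responses, key="A"))
# -> {'B': 0.44, 'C': 0.33, 'D': 0.11, 'E': 0.11} (approximately)
```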


Introduction

Multiple-choice items are popular for standardized tests and for classroom assessment of content mastery. Due to concerns about guessing, most test-writers tend to avoid using MC items that do not contain at least four or five options (i.e., three or four distractors) (Haladyna et al., 2002; Frey, Petersen, Edwards, Pedrotti, & Peyton, 2005; Thorndike & Thorndike-Christ, 2010). In contrast to this tradition of maximizing the number of distractors per item, recent research appears to be converging on a recommendation of three-option MC items (with one key and two distractors) (Bruno & Dirkzwager, 1995; Haladyna et al., 2002; Rodriguez, 2005; Tarrant, Ware, & Mohammed, 2009; Kilgour & Tayyaba, 2016; Vegada, Shukla, Khilnani, Charan, & Desai, 2016; Gierl et al., 2017). Accepting for the moment that a test-writer can devise multiple plausible distractors for at least some MC items on a test, the most pertinent question is how arbitrarily minimizing the number of response options would affect the performance of an MC item in terms of psychometric variables.
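
The psychometric variables at issue here are typically classical item difficulty (the proportion of examinees answering an item correctly) and item discriminability (how strongly success on the item relates to overall test performance). The sketch below shows one common way to compute them under classical test theory; it is an illustration with hypothetical data, not the analysis code used in this study:

```python
import numpy as np

def item_difficulty(item_scores):
    """Classical item difficulty: proportion of examinees answering the item correctly."""
    return float(np.mean(item_scores))

def item_discrimination(item_scores, total_scores):
    """Point-biserial discrimination: correlation between the 0/1 item score and
    each examinee's score on the rest of the test (total minus this item)."""
    rest_scores = np.asarray(total_scores) - np.asarray(item_scores)
    return float(np.corrcoef(item_scores, rest_scores)[0, 1])

# Hypothetical data: 0/1 scores on one item and total test scores for 8 examinees
item = np.array([1, 0, 1, 1, 0, 1, 0, 1])
totals = np.array([38, 22, 35, 40, 18, 30, 25, 36])

print(item_difficulty(item))              # 0.625 (lower values indicate harder items)
print(item_discrimination(item, totals))  # positive values indicate good discrimination
```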


