Abstract

Although many instructors prefer multiple-choice (MC) items due to their convenience and objectivity, many others eschew their use due to concerns that they are less fair than constructed-response (CR) items in evaluating student mastery of course content. To address three common unfairness concerns, I analyzed performance on MC and CR items from tests within nine sections of five different biology courses I taught over a five-year period. In all nine sections, students’ scores on MC items were highly correlated with their scores on CR items (overall r = 0.90), suggesting that MC and CR items quantified mastery of content in an essentially equivalent manner, at least to the extent that students’ relative rankings depended very little on the type of test item. In addition, there was no evidence that any students were unfairly disadvantaged on MC items (relative to their performance on CR items) due to poor guessing abilities. Finally, there was no evidence that females were unfairly assessed by MC items, as they scored 4% higher on average than males on both MC and CR items. Overall, there was no evidence that MC items were any less fair than CR items testing within the same content domain.

Highlights

  • Multiple-choice questions are popular with teachers for a number of reasons: they are efficient, objective, and generally highly reliable tools for the assessment of content mastery (Haladyna & Rodriguez, 2013; Rodriguez & Albano, 2017)

  • The results of the current study, based on test items from nine sections of five biology courses, should help alleviate lingering concerns that multiple-choice items may be less fair to students than constructed-response items

  • The high positive correlations between students’ scores on MC and constructed-response (CR) items within courses suggest that the two types of items were essentially equivalent tests of content mastery

Introduction

Multiple-choice (MC) questions are popular with teachers for a number of reasons: they are efficient, objective, and generally highly reliable tools for the assessment of content mastery (Haladyna & Rodriguez, 2013; Rodriguez & Albano, 2017). Yet three fairness concerns persist. The first is whether MC items measure mastery of the same constructs as constructed-response (CR) items do, i.e., whether the two formats have equal construct validity. The second concerns guessing: the unproven notion at its heart is that good guessers will tend to be students with low mastery, while students who have mastered the content will tend to be poor guessers, so that guessing distorts MC scores. The third concerns gender; more empirical data on gender differences in performance in classroom settings would be of great value for teachers who use, or are considering the use of, MC items in their tests.

To address these three fairness issues, I examined five years of test-performance data from biology courses that I have taught at Roanoke College. I analyzed students’ relative performance on MC and CR items within courses to ask three main questions:

  1. How strongly are students’ scores on MC and CR items correlated within tests of the same set of constructs? (A low or negative correlation would suggest unfairness in terms of unequal construct validity.)
  2. Are some students substantially worse at answering MC items than their performance on CR items would predict? (Such a pattern would suggest MC items are unfair to “bad guessers.”)
  3. Do females (or males) perform worse on MC items than their performance on CR items would predict? (Such a pattern would be evidence of gender-based unfairness.)
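The analyses these questions imply are simple within-course statistics: a Pearson correlation between students’ MC and CR scores (question 1), and residuals from a regression of MC on CR scores, where a student or group scoring well below the MC score their CR score predicts would be the unfairness signature of questions 2 and 3. The paper’s own code and data are not reproduced here; the sketch below is a minimal Python illustration under assumed conditions, with a hypothetical file name and column names (mc_pct, cr_pct, gender).

```python
# Minimal sketch of the within-course analysis described above.
# Assumed data layout: one row per student with percentage scores
# on MC and CR items and a gender label (all names hypothetical).
import pandas as pd
from scipy.stats import pearsonr, linregress

df = pd.read_csv("course_scores.csv")  # hypothetical input file

# Q1: how strongly are MC and CR scores correlated?
r, p = pearsonr(df["mc_pct"], df["cr_pct"])
print(f"MC-CR correlation: r = {r:.2f} (p = {p:.3g})")

# Q2: flag students scoring far below the MC score their CR score predicts.
fit = linregress(df["cr_pct"], df["mc_pct"])
df["mc_residual"] = df["mc_pct"] - (fit.intercept + fit.slope * df["cr_pct"])
threshold = -2 * df["mc_residual"].std()  # e.g., > 2 SD below prediction
print("Possible 'bad guessers':")
print(df[df["mc_residual"] < threshold])

# Q3: compare mean scores and mean MC residuals by gender; a systematically
# negative mean residual for one group would suggest format-based unfairness.
print(df.groupby("gender")[["mc_pct", "cr_pct", "mc_residual"]].mean())
```

A residual near zero means a student’s MC score matches what their CR score predicts; the study’s fairness questions amount to asking whether any student, or either gender, shows systematically negative residuals.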
