Abstract
Purpose
This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer and whether constructed response questions (CRQs) offer more reliable assessment.

Design/methodology/approach
The paper presents a critical review of existing research on MCQs, then reports on an experimental study in which two objective tests (one using MCQs, the other CRQs) were set for an introductory undergraduate course. To maximise completion, the tests were kept short; consequently, differences between individuals' scores across the two tests are examined rather than overall averages and pass rates.

Findings
Most students who excelled in the MCQ test did not do so in the CRQ test. Students could do well in the MCQ test without necessarily understanding the principles being tested.

Research limitations/implications
Conclusions are limited by the small number of questions in each test and by delivery of the tests at different times. This meant that statistical average data would be too coarse to use, and that some students took one test but not the other. Conclusions concerning CRQs are limited to disciplines where numerical answers or short, constrained text answers are appropriate.

Practical implications
MCQs, while useful in formative assessment, are best avoided in summative assessment. Where appropriate, CRQs should be used instead.

Social implications
MCQs are commonplace as summative assessments in education and training. Increasing the use of CRQs in place of MCQs should increase the reliability of tests, including those administered in safety-critical areas.

Originality/value
While others have recommended that MCQs should not be used because they are vulnerable to guessing (Hinchliffe, 2014; Srivastava et al., 2004), this paper presents an experimental study designed to test whether this hypothesis is correct.