Abstract

Background

Failure to adhere to standard item-writing guidelines may render examination questions easier or more difficult than intended. Item complexity describes the cognitive skill level required to obtain a correct answer. Examination items targeting higher cognitive levels promote critical thinking and are recommended to prepare students for clinical training. This study evaluated faculty-authored examinations to determine the impact of item-writing flaws and item complexity on the difficulty and discrimination value of examination items used to assess third-year veterinary students.

Methods

The impact of item-writing flaws and item complexity (cognitive levels I-V) on examination item difficulty and discrimination value was evaluated for 1925 examination items prepared by clinical faculty for third-year veterinary students.

Results

The mean (± SE) percent correct (83.3% ± 17.5) was consistent with target values in professional education, and the mean discrimination index (0.18 ± 0.17) was slightly lower than recommended (0.20). More than one item-writing flaw was identified in 37.3% of questions. The most common item-writing flaws were awkward stem structure, implausible distractors, "longest response is correct," and responses forming a series of true/false statements. Higher cognitive skills (complexity levels III-IV) were required to correctly answer 38.4% of examination items. As item complexity increased, item difficulty and discrimination values increased. The probability of writing discriminating, difficult examination items decreased when implausible distractors and "all of the above" were used, and increased when the distractors comprised a series of true/false statements. Items with four distractors were no more difficult or discriminating than items with three distractors.

Conclusion

Preparation of examination questions targeting higher cognitive levels will increase the likelihood of constructing discriminating items.
Use of implausible distractors to complete a five-option multiple choice question does not strengthen the discrimination value.

Electronic supplementary material

The online version of this article (doi:10.1186/s12909-016-0773-3) contains supplementary material, which is available to authorized users.
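The two metrics the study reports, item difficulty (percent correct) and the discrimination index, come from classical test theory. As an illustrative sketch only (the paper does not publish its computation code), the commonly used upper-lower form of the discrimination index compares an item's difficulty between the top- and bottom-scoring groups of examinees; all function names and data below are hypothetical:

```python
# Illustrative sketch (not from the paper): classical item-analysis
# metrics -- item difficulty (proportion correct) and the
# upper-lower discrimination index D = p_upper - p_lower.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly."""
    return sum(responses) / len(responses)

def discrimination_index(item_responses, total_scores, fraction=0.27):
    """Upper-lower discrimination index.

    Examinees are ranked by total exam score; D is the item's
    proportion correct in the top `fraction` of examinees minus the
    proportion correct in the bottom `fraction` (classically 27%).
    """
    # Order this item's responses by each examinee's total score.
    ranked = [r for _, r in sorted(zip(total_scores, item_responses))]
    n = max(1, int(len(ranked) * fraction))
    lower, upper = ranked[:n], ranked[-n:]
    return sum(upper) / n - sum(lower) / n

# Hypothetical data: 1 = correct, 0 = incorrect on one item,
# alongside each examinee's total exam score.
item = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
totals = [90, 85, 80, 40, 75, 35, 70, 88, 30, 95]

print(item_difficulty(item))                  # 0.7
print(discrimination_index(item, totals))     # 1.0
```

An index near 0.20 (the study's recommended threshold) indicates the item modestly separates stronger from weaker examinees; here the hypothetical item is answered correctly by all top scorers and no bottom scorers, giving the maximum value of 1.0.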

Highlights

  • Failure to adhere to standard item-writing guidelines may render examination questions easier or more difficult than intended

  • All examination questions were authored by college of veterinary medicine (CVM) faculty members, intended to have one correct response, and assessed via automated grading (Scantron™)

  • Examination items were authored by 50 faculty members and appeared on 46 examinations in 16 third year courses (12 core and four elective courses) representing 39 credit hours (33 core and six elective credits); 1689 questions were multiple choice items and 236 were true/false items



Introduction

Failure to adhere to standard item-writing guidelines may render examination questions easier or more difficult than intended. Examination experts estimate that a quality multiple choice question requires 20 to 60 minutes to construct, and item-writing flaws are common in faculty-prepared examinations [5, 7, 8]. Such flaws may render examination questions easier or more difficult than intended [7, 9-14]. Some flaws provide clues that allow unprepared students to guess the correct answer, whereas awkward, unnecessarily complex, or esoteric examination items prevent prepared students from demonstrating their knowledge [7, 9].

