Abstract

Background: Examinations are the traditional assessment tool. In addition to measuring learning, exams are used to guide the improvement of academic programs. The current study evaluated the quality of assessment items in sixth-year clinical clerkship examinations as a function of item format and type/structure, and assessed the effect of the number of response choices on the characteristics of MCQs as assessment items.

Methods: A total of 173 assessment items used in the examinations of sixth-year clinical clerkships of a PharmD program were included. Items were classified as case based or noncase based, and as MCQs or open ended. The psychometric characteristics of the items were studied as a function of the Bloom's levels addressed, the item format, and the number of choices in MCQs.

Results: Items addressing analysis skills were more difficult. No differences were found between case based and noncase based items in terms of difficulty, with slightly better discrimination in the latter. Open-ended items were easier, yet more discriminative. MCQs with a higher number of options were easier. Open-ended questions were significantly more discriminative than MCQs, both as case based and as noncase based items.

Conclusion: Item format, structure, and the number of options in MCQs significantly affected the psychometric properties of the studied items. Noncase based items and open-ended items were easier and more discriminative than case based items and MCQs, respectively. Examination items should be prepared with these characteristics in mind to improve their psychometric properties and maximize their usefulness.

Highlights

  • Data collection: Assessment items used in the paper-based examinations of six clinical clerkship rotations (Cardiology, Critical Care, Respiratory, Endocrinology, Oncology, and Nephrology) of the sixth (senior) year of the Doctor of Pharmacy (PharmD) program offered by the School of Pharmacy at the University of Jordan (SP-UJ) were collected.

  • A total of 173 items, each answered by 72–83 students, were evaluated. These items were collected from six different final clinical clerkship examinations during the senior year of the PharmD program offered by the SP-UJ.

  • The reliability of the studied items differed by item type; multiple-choice questions (MCQs) had an average Cronbach's alpha of .61, while for open-ended items the average Cronbach's alpha was
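Cronbach's alpha values like those quoted above can be reproduced from a raw students-by-items score matrix. Below is a minimal sketch in pure-standard-library Python; the function name and the toy response matrix are illustrative, not taken from the study.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a students x items score matrix.

    scores: one row per student; each row holds that student's
    item scores (e.g. 0/1 for dichotomously scored MCQs).
    """
    k = len(scores[0])                         # number of items
    items = list(zip(*scores))                 # one column per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy example: 4 students answering 3 dichotomously scored items.
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # -> 0.75
```

Population (rather than sample) variances are used throughout, which is the conventional choice for this formula; using sample variances consistently gives the same ratio.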



Introduction

Examinations are the traditional method of evaluating students' performance used by instructors throughout educational history [1]. In addition to measuring learning, exams are used to guide the improvement of academic programs. Good quality examinations are essential for generating reliable data to measure student learning. The Accreditation Council for Pharmacy Education (ACPE) standards for Doctor of Pharmacy (PharmD) programs recommend the implementation of an extensive assessment plan to prepare graduates for practice [4]. Such a plan should include standardized, systematic, reliable, and valid assessment of both knowledge and performance, as well as measurement of the achieved professional competencies. The quality of tests may be inferred, at least partially, from the analysis of test items [6]. (AlKhatib et al., BMC Medical Education (2020) 20:190)
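Test item analysis of the kind referenced above typically starts from two statistics per item: a difficulty index (the proportion of students answering correctly) and an upper-lower discrimination index (how much better the top-scoring students do on the item than the bottom-scoring students). A minimal sketch follows; the function names, the 27% group fraction, and the toy data are illustrative assumptions, not taken from the study.

```python
def item_difficulty(item_scores):
    # Difficulty index p: proportion of students who answered correctly.
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores, frac=0.27):
    # Upper-lower discrimination index D: difference in the item's
    # difficulty between the top and bottom `frac` of students,
    # ranked by their total examination score.
    n = max(1, round(frac * len(total_scores)))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    low, high = ranked[:n], ranked[-n:]
    p_high = sum(item_scores[i] for i in high) / n
    p_low = sum(item_scores[i] for i in low) / n
    return p_high - p_low

# Toy example: one item answered by 10 students (1 = correct),
# alongside their total exam scores.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [9, 8, 8, 7, 6, 5, 4, 4, 3, 2]
print(item_difficulty(item), item_discrimination(item, totals))
```

Items answered correctly mostly by high scorers (D close to 1) discriminate well; D near 0 or negative flags an item worth reviewing, which is the kind of inference about test quality the passage describes.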

