Abstract

Background: Exams are essential components of medical students’ knowledge and skill assessment during their clinical years of study. This paper provides a retrospective analysis of validity evidence for the internal medicine component of the written and clinical exams administered in 2012 and 2013 at King Abdulaziz University’s Faculty of Medicine.

Methods: Students’ scores for the clinical and written exams were obtained. Four faculty members (two senior and two junior) were asked to rate the exam questions, including MCQs and OSCE stations, for evidence of content validity using a 1–5 rating scale for each item. Cronbach’s alpha was used to measure internal consistency reliability, and correlations were used to examine the associations between different forms of assessment and groups of students.

Results: A total of 824 students completed the internal medicine course and took the exam. The numbers of rated questions were 320 for the MCQ and 46 for the OSCE. Significant correlations were found between the MCQ section, the OSCE section, and the continuous assessment marks, which include 20 long-case presentations during the course; participation in daily rounds, clinical sessions and tutorials; the performance of simple procedures, such as IV cannulation and ABG extraction; and the student log book. Although the OSCE was reliable for the two groups that took the final clinical OSCE, the clinical long- and short-case exams were not reliable across the two groups that took the oral clinical exams. The correlation analysis showed a significant linear association between the raters with respect to evidence of content validity for both the MCQ and OSCE (r = .219, P < .001 and r = .678, P < .001, respectively) and with respect to internal structure validity (r = .241, P < .001 and r = .368, P = .023, respectively). Reliability measured using Cronbach’s alpha was greater for the assessments administered in 2013.

Conclusion: The pattern of relationships between the MCQ and OSCE scores provides evidence of the validity of these measures for use in evaluating knowledge and clinical skills in internal medicine. The OSCE is more reliable than the short- and long-case clinical exams and requires less effort on the part of examiners and patients.
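As a rough illustration of the statistics reported above, the sketch below shows how internal consistency (Cronbach’s alpha) and a Pearson correlation between two assessment scores can be computed. It is not the authors’ analysis code; the data, sample size, and variable names (e.g., item_scores, mcq_totals, osce_totals) are hypothetical and chosen only for demonstration.

# Illustrative sketch: Cronbach's alpha and a Pearson correlation on hypothetical exam data.
import numpy as np
from scipy import stats

# Hypothetical data: rows = students, columns = exam items (e.g., MCQ items scored 0/1)
rng = np.random.default_rng(0)
item_scores = rng.integers(0, 2, size=(50, 20)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

alpha = cronbach_alpha(item_scores)

# Hypothetical total scores on two assessment forms (e.g., MCQ vs. OSCE totals)
mcq_totals = item_scores.sum(axis=1)
osce_totals = mcq_totals * 0.8 + rng.normal(0, 2, size=50)  # correlated by construction
r, p = stats.pearsonr(mcq_totals, osce_totals)

print(f"Cronbach's alpha = {alpha:.3f}, Pearson r = {r:.3f} (p = {p:.3g})")

Higher alpha values indicate that the items of an exam section hang together as a measure of one construct, and the r and p values correspond to the kind of inter-score and inter-rater correlations reported in the Results.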

Highlights

  • Exams are essential components of medical students’ knowledge and skill assessment during their clinical years of study

  • The aim of this study is to evaluate the reliability and to present validity evidence for both the written and clinical exams administered to sixth-year medical students during the internal medicine rotation at the King Abdulaziz University (KAU) Faculty of Medicine in 2012 and 2013

  • In 2012, 888 short cases were provided in the midterm exam, and a similar number was used in the final exam along with 444 long cases



Introduction

Exams are essential components of medical students’ knowledge and skill assessment during their clinical years of study, because their future careers as clinicians depend on their competency and knowledge [1,2,3]. The Faculty of Medicine at King Abdulaziz University has recently undergone curricular changes, including a shift from a year-based teaching system to an integrated block-based system. This transition involved modifying examination and assessment methods and administering OSCEs to clinical-year medical students enrolled in the four major clinical departments. The mid-rotation and final exams include both written and clinical components to ensure a thorough assessment of students’ knowledge and skills. The validity and reliability of clerkship examinations are essential to ensure that the necessary competence and skills are measured properly and consistently.
