Abstract
Background: The number of medical students accepted into medical programs is increasing, which has made the traditional long/short case style of examination difficult to conduct. At Dammam University, the program is shifting to the Objective Structured Clinical Examination (OSCE), which may resolve some of these difficulties, including issues with reliability, validity and exam duration.
Results: A pilot study was conducted over one semester. A total of 207 examinees in three groups took the OSCE and written exams. The OSCE consisted of 18 clinical stations and required 3–4.3 h/day. The written exam contained 80 multiple-choice questions. Cronbach’s alpha for the three groups was 0.7, 0.8, and 0.9. Correlations for all stations ranged from 0.7 to 0.8, indicating good stability and internal consistency with minor differences in the progression of the indexes. The reliability of the written exam was 0.79, and the validity of the OSCE was 0.63, as assessed using Pearson’s correlation.
Conclusion: No single reliability index can be considered a perfect assessment tool. Thus, at least two or three indexes should be used to ensure the reliability of the OSCE.
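To make the reliability figures concrete, the sketch below computes Cronbach's alpha from an examinees-by-stations score matrix. The data are hypothetical (not the study's actual scores), and the formula is the standard one: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = scores.shape[1]                          # number of items (OSCE stations)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each station
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for 5 examinees across 4 stations
scores = np.array([
    [8, 7, 9, 8],
    [5, 6, 5, 6],
    [9, 8, 9, 9],
    [4, 5, 4, 5],
    [7, 7, 8, 7],
], dtype=float)
print(round(cronbach_alpha(scores), 2))
```

A real OSCE analysis would use one such matrix per examination day, giving one alpha per group, as reported in the abstract.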
Highlights
The number of medical students accepted into medical programs is increasing, which has made the traditional long/short case style of examination difficult to conduct
Spearman’s rank correlation and the R² coefficient of determination were used to correlate the checklist results with the global score to arrive at an internal consistency score
The correlations were 0.7, 0.7, and 0.8 (p < 0.001) for both Cronbach’s alpha and Spearman’s rank correlation, which indicated a strong correlation between the checklist score and global rating on all days of the exam
Summary
The number of medical students accepted into medical programs is increasing, which has made the traditional long/short case style of examination difficult to conduct. The R² coefficient of determination measures the proportional change in the dependent variable (in our case, the checklist score) relative to changes in the independent variable (the global grade). It is a marker of internal consistency [6–14], but the index is imperfect: if the examiner makes the checklist score correspond to the global score (a student who completed every checklist item automatically receives a clear pass as the global score, and vice versa), the result is false inflation of R², because the global rating is meant to score the student’s confidence, organization and professional application of clinical skills, which might not be captured by the checklist sheets [14]. Most published reports have discussed the advantages of the OSCE as a reliable and valid examination method, but none have focused on the reliability of the indexes used to assess the exam, or on whether a small difference between them means a single index is sufficient [17, 20]
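The R² and Spearman statistics described above can be illustrated on paired checklist/global scores. The sketch below uses hypothetical scores for six examinees (not the study's data): R² is the squared Pearson correlation between the two score series, and Spearman's rho is the Pearson correlation of their ranks (the data here have no ties, so no tie-averaging is needed).

```python
import numpy as np

# Hypothetical paired scores for 6 examinees: detailed checklist score
# (out of 20) and the examiner's holistic global rating (out of 10).
checklist = np.array([18, 12, 15, 9, 20, 14], dtype=float)
global_rating = np.array([9, 6, 7, 4, 10, 5], dtype=float)

# R^2: proportion of variance in the checklist score explained by the
# global rating (squared Pearson correlation for a simple regression).
r = np.corrcoef(checklist, global_rating)[0, 1]
r_squared = r ** 2

def ranks(x: np.ndarray) -> np.ndarray:
    """Rank each value 1..n (assumes no ties)."""
    out = np.empty(len(x))
    out[np.argsort(x)] = np.arange(1, len(x) + 1)
    return out

# Spearman's rho: Pearson correlation of the ranks.
rho = np.corrcoef(ranks(checklist), ranks(global_rating))[0, 1]
print(round(r_squared, 2), round(rho, 2))
```

If the examiner simply derived the global rating from the checklist score, both series would be perfectly monotone and R² and rho would approach 1.0, which is exactly the false-inflation problem the summary describes.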