Abstract

The study compares student ratings of perceived difficulty for 10 objective structured clinical exam (OSCE) stations with faculty ratings of difficulty and actual student test performance. In 1995, 171 medical students completed the OSCE at the end of their third year. After the OSCE, students were asked to evaluate the difficulty of each station on a 5-point scale (5 = easy to 1 = too difficult). Faculty members had rated the difficulty of each station before the exam. Kendall (nonparametric) correlations were used to determine the associations among the ranked average difficulty ratings of students and faculty and the ranked average test performance for each station. The student and faculty ratings of station difficulty did not correlate (τ = -.20, p = .42). The student difficulty rating did not correlate with student performance (τ = .13, p = .59), nor did the faculty difficulty rating correlate with student performance (τ = .27, p = .28). These results indicate that students and faculty do not agree on the relative level of difficulty of each task. The weak correlations of both student and faculty ratings with actual performance (score) suggest that neither is a very accurate predictor of actual performance.
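The Kendall correlation used in the study measures rank agreement as the normalized difference between concordant and discordant pairs of stations. A minimal sketch of the tau-a statistic (which assumes no tied ranks; the station data are illustrative placeholders, not the study's data):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    Assumes no ties in either ranking; tied data would need tau-b."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # pair ordered the same way in both rankings
        elif s < 0:
            discordant += 1   # pair ordered oppositely
    total = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / total

# Hypothetical difficulty ranks for 10 OSCE stations (illustration only):
student_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
faculty_rank = [3, 1, 4, 2, 6, 5, 8, 7, 10, 9]
print(round(kendall_tau(student_rank, faculty_rank), 2))  # → 0.73
```

With only 10 stations there are 45 pairs, so even a moderate tau can fail to reach significance, consistent with the nonsignificant p-values reported above.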
