Abstract

Introduction

Various studies have examined Objective Structured Clinical Examinations (OSCEs) in Dentistry in order to accumulate validity evidence for their use as a tool for assessing clinical competence in students. This article introduces a newly designed OSCE in Dentistry (OSCE-D) and discusses the results of an analysis, from the perspective of generalisability theory, of data obtained from one application of the examination.

Method

An observational, cross-sectional study was conducted in the Faculty of Dentistry at UNAM. One hundred and twenty undergraduate students took part in an OSCE consisting of 18 stations of 6 minutes each, in the context of a fourth-year Paediatric Dentistry course. An analysis based on generalisability theory, with raters and stations considered as facets, identified the main sources of variability in the data.

Results

The overall mean (and standard deviation) of the OSCE score, across participants and stations, was 44% (7%), with station means ranging from 23% to 63%. The generalisability study showed that the rater facet explained a larger portion of the variance in station results (13%) than the clinical competence of the participants (6%). The decision study produced a generalisability index of 0.63 and a dependability index of 0.55.

Conclusions

Given the rather low reliability indices from the decision study, a further analysis of the OSCE-D is needed to minimise the effect of sources that introduce construct-irrelevant variance into the results. In particular, some stations may require adjustment, and the raters' use of the evaluation criteria should be better standardised.
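As context for the decision (D) study indices reported above, the sketch below shows how generalisability (G) and dependability (Φ) coefficients are typically computed from estimated variance components in a fully crossed persons × stations × raters design. The variance component values, the number of raters per station, and the crossed-design assumption are illustrative only and are not taken from the study.

```python
def g_coefficients(var, n_s, n_r):
    """Relative (G) and absolute (Phi) coefficients for a fully crossed
    persons x stations x raters design, given variance components."""
    # Relative error: interactions involving persons, averaged over facets
    rel_err = var["ps"] / n_s + var["pr"] / n_r + var["psr_e"] / (n_s * n_r)
    # Absolute error adds the main effects and interaction of the facets
    abs_err = rel_err + var["s"] / n_s + var["r"] / n_r + var["sr"] / (n_s * n_r)
    g = var["p"] / (var["p"] + rel_err)
    phi = var["p"] / (var["p"] + abs_err)
    return g, phi

# Hypothetical variance components (not the study's actual estimates)
var_components = {"p": 5.0, "s": 25.0, "r": 10.0,
                  "ps": 15.0, "pr": 4.0, "sr": 5.0, "psr_e": 18.0}

# 18 stations as in the OSCE-D; 2 raters per station is assumed here
g, phi = g_coefficients(var_components, n_s=18, n_r=2)
print(f"G = {g:.2f}, Phi = {phi:.2f}")
```

In a D study, these formulas are re-evaluated for different numbers of stations and raters to see how many would be needed to reach an acceptable reliability level.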
