Cost Analysis of the OSCE: Scoping Review.
This scoping review examined the economic components of implementing Objective Structured Clinical Examinations (OSCEs) in health professions education. Forty-nine studies published between 1986 and 2024, spanning multiple countries, professions, and simulation modalities, were analyzed. Key cost categories included standardized patients, evaluator compensation, infrastructure, logistics, and digital resources. Reported costs ranged from less than US$7 per student to more than US$150,000 per cycle. Traditional in-person OSCEs incurred higher recurring operational expenses, whereas online and hybrid formats concentrated spending in initial infrastructure and showed potential for scalability and cost containment. Significant inconsistencies in cost definitions and reporting practices limited comparability across studies. The findings highlight the urgent need for standardized economic frameworks, cost-reporting tools, and context-sensitive strategies to guide sustainable OSCE implementation, particularly in low- and middle-income countries where resource allocation must be strategic and aligned with institutional capacity.
- Research Article
- 10.1016/j.nedt.2015.01.007
- Jan 28, 2015
- Nurse Education Today
Application of best practice guidelines for OSCEs—An Australian evaluation of their feasibility and value
- Research Article
- 10.3389/fmed.2022.825502
- Feb 21, 2022
- Frontiers in Medicine
The Objective Structured Clinical Examination (OSCE) has traditionally been viewed as a highly valued tool for assessing clinical competence in health professions education. However, as the OSCE typically consists of a large-scale, face-to-face assessment activity, it has been variably criticized over recent years due to the extensive resourcing and relative expense required for delivery. Importantly, due to COVID-pandemic conditions and necessary health guidelines in 2020 and 2021, logistical issues inherent with OSCE delivery were exacerbated for many institutions across the globe. As a result, alternative clinical assessment strategies were employed to gather assessment datapoints to guide decision-making regarding student progression. Now, as communities learn to “live with COVID”, health professions educators have the opportunity to consider what weight should be placed on the OSCE as a tool for clinical assessment in the peri-pandemic world. To elucidate this timely clinical assessment issue, this qualitative study utilized focus group discussions to explore the perceptions of 23 clinical assessment stakeholders (examiners, students, simulated patients and administrators) in relation to the future role of the traditional OSCE. Thematic analysis of the focus group transcripts revealed four major themes in relation to participants' views on the future of the OSCE vis-à-vis other clinical assessments in this peri-pandemic climate. The identified themes are (a) enduring value of the OSCE; (b) OSCE tensions; (c) educational impact; and (d) the importance of programs of assessment. It is clear that the OSCE continues to play a role in clinical assessments due to its perceived fairness, standardization and ability to yield robust results. However, recent experiences have resulted in a diminishing and refining of its role alongside workplace-based assessments in the new, peri-pandemic programs of assessment.
Future programs of assessment should consider the strategic positioning of the OSCE within the context of utilizing a range of tools when determining students' clinical competence.
- Research Article
- 10.1016/j.cptl.2025.102520
- Feb 1, 2026
- Currents in Pharmacy Teaching & Learning
Intra-rater reliability of in-person versus simulated remote synchronous faculty evaluation of pharmacy student objective structured clinical examinations.
- Research Article
- 10.1080/0142159x.2020.1795100
- Jul 31, 2020
- Medical Teacher
Objective Structured Clinical Examinations (OSCEs) are a dominant, yet problematic, assessment tool across health professions education (HPE). OSCEs’ standardised approach aligns with regulatory accountability, allowing learners to exchange exam success for the right to practice. We offer a sociohistorical account of OSCEs’ development to support an interpretation of present assessment practices. OSCEs create tensions. Preparing for OSCE success diverts students away from the complexity of authentic clinical environments. Students will not qualify and will, therefore, be of no use to patients without getting marks providing evidence of competence. Performing in a formulaic and often non patient-centred way is the price to pay for a qualification. Acknowledging the stultifying effect of standardising human behaviour for OSCEs opens up possibilities to release latent energy for change in medical education. In this imagined future, the overall object of education is refocused on patient care.
- Research Article
- 10.4066/amj.2011.755
- Jun 30, 2011
- The Australasian Medical Journal
The Objective Structured Clinical Examination (OSCE) is a widely used tool for the assessment of clinical competence in health professional education. The goal of the OSCE is to make reproducible decisions on pass/fail status as well as students' levels of clinical competence according to their demonstrated abilities based on the scores. This paper explores the use of the polytomous Rasch model in evaluating the psychometric properties of OSCE scores through a case study. The authors analysed an OSCE data set (comprising 11 stations) for 80 fourth-year medical students based on the polytomous Rasch model in an effort to answer two research questions: (1) Do the clinical tasks assessed in the 11 OSCE stations map on to a common underlying construct, namely clinical competence? (2) What other insights can Rasch analysis offer in terms of scaling, item analysis and instrument validation over and above the conventional analysis based on classical test theory? The OSCE data set demonstrated a sufficient degree of fit to the Rasch model (Χ² = 17.060, df = 22, p = 0.76), indicating that the 11 OSCE station scores have sufficient psychometric properties to form a measure for a common underlying construct, i.e. clinical competence. Individual OSCE station scores with good fit to the Rasch model (p > 0.1 for all Χ² statistics) further corroborated the characteristic of unidimensionality of the OSCE scale for clinical competence. A Person Separation Index (PSI) of 0.704 indicates a sufficient level of reliability for the OSCE scores. Other useful findings from the Rasch analysis that provide insights, over and above the analysis based on classical test theory, are also exemplified using the data set. The polytomous Rasch model provides a useful and supplementary approach to the calibration and analysis of OSCE examination data.
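The Person Separation Index reported in this abstract can be computed from estimated person measures and their standard errors: it is the proportion of observed variance in person measures that is not attributable to measurement error. A minimal sketch follows (not the authors' analysis, which would have used dedicated Rasch software; the toy data are invented):

```python
import statistics

def person_separation_index(measures, standard_errors):
    """Rasch-style person reliability: share of the observed variance in
    person measures that survives after subtracting mean error variance."""
    observed_var = statistics.pvariance(measures)
    mean_error_var = statistics.fmean(se ** 2 for se in standard_errors)
    return (observed_var - mean_error_var) / observed_var

# Hypothetical person measures (logits) and their standard errors:
measures = [-1.0, -0.2, 0.4, 1.2]
errors = [0.5, 0.4, 0.4, 0.5]
psi = person_separation_index(measures, errors)
```

A PSI near 0.7, as in the study above, is conventionally read as sufficient reliability for group-level decisions.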
- Research Article
- 10.1016/s1607-551x(09)70392-4
- Apr 1, 2007
- The Kaohsiung Journal of Medical Sciences
Implementation of an OSCE at Kaohsiung Medical University
- Research Article
- 10.3390/healthcare9030355
- Mar 20, 2021
- Healthcare
In response to the cancellation of in-person objective structured clinical examinations (OSCEs) prompted by confinement due to the COVID-19 pandemic, we designed a solution to adapt our traditional OSCEs to this new reality in nursing education. We implemented an innovative teaching proposal based on high-fidelity virtual OSCEs with standardized patients. The purposes of our study were to describe this innovative teaching proposal and compare nursing competence acquisition in final year nursing students through virtual and in-person OSCE modalities. The study included 234 undergraduate students: 123 students were assessed through high-fidelity virtual OSCEs during May 2020, whereas 111 students were assessed through in-person OSCEs during May 2019. The structure of the OSCEs, including their stations, clinical simulated scenarios, and checklists, was the same in both modalities. The effect size of the differences among the competence categories of the checklists, including their total scores, was small. Given that our virtual OSCEs were similarly successful to in-person OSCEs, this online format was found to be useful, feasible, and cost-saving when an in-person OSCE was not possible. Therefore, high-fidelity virtual OSCEs with standardized patients could be considered an alternative OSCE modality, not only during the current COVID-19 pandemic but also in normal, post-pandemic circumstances.
- Research Article
- 10.1002/hsr2.2116
- May 1, 2024
- Health Science Reports
Objective structured clinical examination (OSCE) is well-established and designed to evaluate students' clinical competence and practical skills in a standardized and objective manner. While OSCEs are widespread in higher-income countries, their implementation in low-resource settings presents unique challenges that warrant further investigation. This study aims to evaluate the perception of health sciences students and their educators regarding deploying OSCEs within the School of Health Sciences and Techniques of Sousse (SHSTS) in Tunisia and their efficacy in healthcare education compared to traditional practical examination methods. This cross-sectional study was conducted in June 2022, focusing on final-year Health Sciences students at the SHSTS in Tunisia. The study participants were students and their educators involved in the OSCEs from June 6th to June 11th, 2022. Anonymous paper-based 5-point Likert scale satisfaction surveys were distributed to the students and their educators, with a separate set of questions for each. Spearman, Mann-Whitney U, and Kruskal-Wallis tests were utilized to test the differences in satisfaction with the OSCEs among the students and educators. The Wilcoxon rank test was utilized to examine the differences in students' assessment scores in the OSCEs and the traditional practical examination methods. The satisfaction scores were high among health sciences educators and above average for students, with means of 3.82 ± 1.29 and 3.15 ± 0.56, respectively. The bivariate and multivariate analyses indicated a significant difference in satisfaction between the students' specialities. Further, a significant difference in their assessment score distributions in the practical examinations and OSCEs was also demonstrated, with better performance in the OSCEs. Our study provides evidence of a relatively high level of satisfaction with the OSCEs and better performance compared to the traditional practical examinations.
These findings advocate for the efficacy of OSCEs in low-income countries and the need to sustain them.
- Research Article
- 10.1080/10401334.2015.1044749
- Jul 3, 2015
- Teaching and Learning in Medicine
Construct: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. Background: The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Approach: Thirty OSCE stations administered at least twice in the years 2010–2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of Wijnen, Cohen, Mean–1.5SD, Mean–1SD, Angoff, borderline group and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. Results: The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). BL-R and Wijnen methods show the highest convergent validity evidence among other methods on the defined criteria. Angoff and Mean–1.5SD demonstrated least convergent validity evidence. The three cluster variants showed substantial convergent validity with borderline methods. Conclusions: Although there was a high level of convergent validity of Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. 
The BL-R method showed the highest convergent validity evidence with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
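The borderline regression (BL-R) method discussed in this abstract regresses station checklist scores on examiners' global ratings and takes the predicted score at the borderline grade as the cutoff. A minimal sketch with invented data (the rating scale, scores, and function names are hypothetical, not the study's implementation):

```python
def borderline_regression_cutoff(global_ratings, checklist_scores, borderline_grade):
    """Fit checklist_score = a + b * global_rating by ordinary least squares,
    then read off the predicted score at the borderline grade."""
    n = len(global_ratings)
    mean_x = sum(global_ratings) / n
    mean_y = sum(checklist_scores) / n
    sxx = sum((x - mean_x) ** 2 for x in global_ratings)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(global_ratings, checklist_scores))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * borderline_grade

# Hypothetical station: global ratings on a 1 (fail) to 5 (excellent) scale,
# with grade 2 designated as "borderline".
ratings = [1, 2, 2, 3, 3, 4, 5]
scores = [8.0, 11.0, 12.5, 15.0, 16.0, 18.5, 22.0]
cutoff = borderline_regression_cutoff(ratings, scores, borderline_grade=2)
```

Because the cutoff is derived from all examinees' data rather than a judge panel, BL-R is often favoured for OSCEs, which is consistent with its strong convergent validity in the study above.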
- Research Article
- 10.1016/j.hpe.2020.02.005
- Mar 12, 2020
- Health Professions Education
Impact of Structured Feedback on Examiner Judgements in Objective Structured Clinical Examinations (OSCEs) Using Generalisability Theory
- Research Article
- 10.4236/ce.2012.326142
- Jan 1, 2012
- Creative Education
Objective Structured Clinical Examinations (OSCEs) have been used globally in evaluating clinical competence in the education of health professionals. Despite the objective intent of OSCEs, scoring methods used by examiners have been a potential source of measurement error affecting the precision with which test scores are determined. In this study, we investigated the differences in the inter-rater reliabilities of objective checklist and subjective global rating scores of examiners (who were exposed to an online training program to standardise scoring techniques) across two medical schools. Examiners’ perceptions of the e-scoring program were also investigated. Two Australian universities shared three OSCE stations in their end-of-year undergraduate medical OSCEs. The scenarios were video-taped and used for on-line examiner training prior to actual exams. Examiner ratings of performance at both sites were analysed using generalisability theory. A single facet, all random persons by raters design [PxR] was used to measure inter-rater reliability for each station, separate for checklist scores and global ratings. The resulting variance components were pooled across stations and examination sites. Decision studies were used to measure reliability estimates. There was no significant mean score difference between examination sites. Variation in examinee ability accounted for 68.3% of the total variance in checklist scores and 90.2% in global ratings. Rater contribution was 1.4% & 0% of the total variance in checklist score and global rating respectively, reflecting high inter-rater reliability of the scores provided by co-examiners across the two schools. Score variance due to interaction and residual error was larger for checklist scores (30.3% vs 9.7%) than for global ratings. Reproducibility coefficients for global ratings were higher than for checklist scores. Survey results showed that the e-scoring package facilitated consensus on scoring techniques. 
This approach to examiner training also allowed examiners to calibrate the OSCEs in their own time. This study revealed that inter-rater reliability was higher for global ratings than for checklist scores, thus providing further evidence for the reliability of subjective examiner ratings.
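The decision-study reliabilities discussed in this abstract follow directly from the generalisability-theory variance components. A minimal sketch (the helper names and single-rater assumption are ours; the variance proportions plugged in are those reported in the abstract):

```python
def relative_g_coefficient(var_person, var_residual, n_raters):
    """Relative G coefficient for a persons-by-raters (P x R) design:
    person variance over person variance plus averaged interaction/residual error."""
    return var_person / (var_person + var_residual / n_raters)

def absolute_phi_coefficient(var_person, var_rater, var_residual, n_raters):
    """Dependability (phi) coefficient: the rater main effect also counts as error."""
    return var_person / (var_person + (var_rater + var_residual) / n_raters)

# Variance proportions from the abstract, assuming a single rater per station:
# checklist scores: 68.3% person, 1.4% rater, 30.3% interaction/residual
checklist_g = relative_g_coefficient(68.3, 30.3, n_raters=1)
# global ratings: 90.2% person, 0% rater, 9.7% interaction/residual
global_g = relative_g_coefficient(90.2, 9.7, n_raters=1)
```

Plugging in the reported proportions reproduces the abstract's conclusion: the reproducibility coefficient for global ratings exceeds that for checklist scores, and adding raters (increasing `n_raters`) raises both.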
- Research Article
- 10.46542/pe.2022.221.165171
- Feb 25, 2022
- Pharmacy Education
The Objective Structured Clinical Examination (OSCE) is a highly valued performance-based competency assessment that is extensively employed in medical and health professions education. In pharmacy undergraduate programmes, OSCE is an integral component of the curriculum, constituting both formative and summative assessments of the course. When the COVID-19 pandemic posed an overarching challenge in the delivery of face-to-face teaching and learning activities, academic institutions around the world ineluctably transitioned to online mode of education. Conducting OSCEs on virtual platforms presents its unique set of challenges. In the absence of physical isolation and invigilation of students, the risk of cheating and collusion is particularly high during virtual OSCEs. With the experience of conducting high-stakes OSCEs on virtual platforms at two different campuses simultaneously, the authors outline several strategies that can be implemented to ensure the academic integrity of the assessment.
- Preprint Article
- 10.21203/rs.3.rs-4959116/v1
- Sep 26, 2024
Background: Objective Structured Clinical Examination (OSCE) is a widely used clinical assessment method in health professions education. It is a reliable and objective assessment tool that accurately measures students’ clinical skills and knowledge, confirming their competence in real-world practice. However, despite the OSCE being used to certify students’ clinical competency skills, many nursing students often lack the necessary clinical skills to provide quality patient care. The study aimed to explore challenges that college diploma nursing students encounter with OSCE at selected nursing colleges in Malawi. Methods: The study employed a qualitative Husserlian phenomenological design at three nursing colleges: Malawi College of Health Sciences (Zomba Campus) in the Southern Region, Nkhoma College of Health Sciences in the Central Region and St. John’s Institute for Health in the Northern Region. The study recruited fifty-three final year college diploma nursing students from the three nursing colleges using a purposive sampling technique. Three focus group discussions and twenty-five in-depth interviews were conducted in English, audiotaped and later transcribed verbatim. Data from both sources were triangulated and then manually analyzed using Colaizzi’s data analysis method. Results: Three themes related to challenges faced by nursing students regarding OSCE emerged from Colaizzi’s data analysis. These included (1) emotional and psychological issues, notably high levels of stress and anxiety related to the OSCE, (2) administrative difficulties and (3) academic difficulties. Conclusion: Diploma nursing students encounter complex challenges with OSCE in Malawi. The study findings emphasized the need for nursing education institutions to address the challenges through targeted interventions which can enhance the learning environment and produce competent nursing professionals.
- Research Article
- 10.1080/0142159x.2017.1309375
- Apr 11, 2017
- Medical Teacher
Background: The objective structured clinical examination (OSCE), originally designed with experts assessing trainees’ competence, is more frequently employed with an element of peer assessment and feedback. Although peer assessment in higher education has been studied, its role in OSCEs has not been reviewed. Aims: The aim of this study is to conduct a scoping review and explore the role of peer assessment and feedback in the OSCE. Methods: Electronic database and hand searching yielded 507 articles. Twenty-one full records were screened, of which 13 were included in the review. Two independent reviewers completed each step of the review. Results: Peer-based OSCEs are used to assess students’ accuracy in assessing OSCE performance and to promote learning. Peer examiners (PEs) tend to award better global ratings and variable checklist ratings compared to faculty, and provide high-quality feedback. Participating in these OSCEs is perceived as beneficial for learning. Conclusions: Peer assessment and feedback can be used to gauge PE reliability and promote learning. Teachers using these OSCEs must use methodology which fits their purpose. Competency-based education calls for diversification of assessment practices and asks how assessment impacts learning; the peer-based OSCE responds to these demands and will become an important practice in health professions education.
- Research Article
- 10.1186/s41077-024-00307-1
- Aug 15, 2024
- Advances in Simulation
Background: Dermatological conditions are a common reason for patients to seek healthcare advice. However, they are often under-represented in Objective Structured Clinical Examinations (OSCEs). Given the visual nature of skin conditions, simulation is suited to recreate such skin conditions in assessments such as OSCEs. One such technique often used in simulation is moulage—the art and science of using special effects make-up techniques to replicate a wide range of conditions on Simulated Participants or manikins. However, the contextual nature of OSCEs places additional challenges compared to using moulage in more general forms of simulation-based education. Main body: OSCEs are high-stakes assessments and require standardisation across multiple OSCE circuits. In addition, OSCEs tend to have large numbers of candidates, so moulage needs to be durable in this context. Given the need to expand the use of moulage in OSCE stations and the unique challenges that occur in OSCEs, there is a requirement to have guiding principles to inform their use and development. Conclusion: Informed by evidence, and grounded in experience, this article aims to provide practical tips for health profession education faculty on how best to optimise the use of moulage in OSCEs. We will describe the process of designing an OSCE station, with a focus on including moulage. Secondly, we will provide a series of important practice points to use moulage in OSCEs—and encourage readers to integrate them into their day-to-day practice.