Abstract

Introduction
Simulation is an established training modality, though its use in high-stakes assessment remains under-explored. This study sought to develop and validate a novel assessment of operative competence for higher specialist trainees in general surgery.

Methods
Content validity was derived from a previously conducted Delphi consensus study of trainers. Eight stations were created using a combination of virtual reality and bio-hybrid models. The assessment was designed to assess performance at the end of 'phase 2' of training. Performance was scored using an equally weighted Objective Structured Assessment of Technical Skills (OSATS) tool and a modified Procedure Based Assessment (PBA) tool. Validity evidence was analysed in accordance with Messick's validity framework. Both 'junior' (ST2-ST4) and 'senior' (ST5-ST8) trainees were included to allow for comparative analysis.

Results
Thirteen trainees (six 'junior' and seven 'senior') were observed and scored by a total of ten assessors across eight stations. Inter-station reliability was high (Cronbach's alpha 0.81) and inter-rater reliability was acceptable (intraclass correlation coefficient 0.77). A significant difference in mean station score was observed between junior and senior trainees (44.82 vs 58.18, p = 0.004). Mean scores were moderately correlated with increasing training year (rs = 0.74, p = 0.004; Kendall's tau-b 0.57, p = 0.009). A pass-fail score generated using borderline regression methodology resulted in all 'senior' trainees passing and four of six (66.7%) 'junior' trainees failing the assessment.

Conclusion
This study demonstrates the validity, reliability and feasibility of a novel simulation-based assessment designed to assess the operative competence of higher specialist trainees in general surgery.
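For readers unfamiliar with borderline regression standard setting, the sketch below illustrates the general technique: examiners assign each performance both a numeric station score and a global rating, the score is regressed on the global rating, and the pass mark is the predicted score at the 'borderline' rating. This is a minimal illustration with hypothetical data and is not the authors' analysis code.

```python
# Minimal sketch of borderline-regression standard setting.
# Assumes hypothetical paired data: a numeric station score and an
# examiner global rating (1 = fail, 2 = borderline, 3 = pass).
import numpy as np

# Hypothetical station scores and their paired global ratings
scores = np.array([38.0, 45.0, 52.0, 61.0, 47.0, 70.0, 55.0, 66.0])
global_ratings = np.array([1, 2, 2, 3, 2, 3, 2, 3])

# Ordinary least squares: regress station score on global rating
slope, intercept = np.polyfit(global_ratings, scores, deg=1)

# The pass mark is the predicted score at the 'borderline' rating (2)
borderline_rating = 2
pass_mark = slope * borderline_rating + intercept
print(f"Borderline-regression pass mark: {pass_mark:.1f}")
```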
