Abstract

Introduction/Background

Simulations for improving interprofessional teamwork are highly resource-intensive in terms of financial and personnel costs. Although participant feedback is often strongly positive, the desired outcome is evidence of learning and improvement in clinical practice. Interprofessional operating room simulations include a surgeon, an anesthesiologist, and nurses. A number of tools have been rigorously developed to assess the performance of individuals and teams in the operating room. Unfortunately, many of these tools were designed for research purposes only or require intensive observer training, which typically involves bringing observers to high inter-rater agreement before they assess actual individual or team performance. Translational research is needed to assess the efficacy of these tools in everyday situations.

Methods

This mixed-methods study investigated the use of four published tools to assess specialty-specific and team behaviors in operating room (OR) simulations: Anesthesiologists' Non-Technical Skills (ANTS), Scrub Practitioners' List of Intra-operative Non-Technical Skills (SPLINTS), Non-Technical Skills for Surgeons (NOTSS), and the Objective Teamwork Assessment System (OTAS). Three of the tools, ANTS, SPLINTS, and NOTSS, ask raters to give one mark on each of 4 to 12 elements, characterizing overall performance on a scale of poor, marginal, acceptable, or good. OTAS is more complex, with five dimensions rated on a seven-point scale indicating the degree to which behavior in each dimension had a positive or negative impact on overall team function. The scenario was a patient experiencing malignant hyperthermia during an epigastric hernia repair. Participants in each simulation included two junior residents in anesthesiology, one junior resident in surgery, and three to five practicing OR nurses. Our interprofessional research team (one surgeon, one nurse, one social scientist, one research assistant, and two anesthesiologists) used each of the four tools while watching a video replay of one of our interprofessional team training scenarios. The order of the videos and the sequence in which each tool was used were identical for all raters. After completing the tools for all of the videos, raters answered several feasibility questions on a Likert scale from 1 = very low to 7 = very high.

Results

Inter-rater agreement was generally high: average rater agreement was 0.81 for ANTS, 0.90 for SPLINTS, 0.96 for NOTSS, and 0.93 for OTAS. All of the instruments were felt to have similar clarity of elements and categories (mean = 4.2). Raters were moderately confident that their assessments would be similar to those of another rater (mean = 4.1). Interestingly, the OTAS instrument was felt to require the most time and mental energy to complete (mean = 6.7) but was also considered the most likely to yield the same ratings if used on the same video a week later (mean = 4.5). All of the tools were felt to be suitable for both real-time observation and video replay; however, OTAS was felt to be the least useful for real-time observation (mean = 2.4).

Conclusion

Agreement among raters with diverse backgrounds was generally very good across five videos, without any training sessions. All of the instruments were felt to be suitable for assessing team function, but only the OTAS instrument was felt to produce consistent evaluations across raters and over time.

Disclosures

Gordon Center, University of Miami School of Medicine.
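
The abstract does not specify how "average rater agreement" was calculated. As one illustration only, the sketch below computes mean pairwise percent agreement for Likert-type element scores such as those produced by ANTS, SPLINTS, NOTSS, or OTAS; the rater names, scores, and function are hypothetical and not taken from the study.

```python
# Minimal sketch (assumption): agreement computed as mean pairwise percent
# agreement across raters. The study may have used a different statistic.
from itertools import combinations

def pairwise_agreement(ratings_by_rater):
    """ratings_by_rater: dict mapping rater -> list of element scores,
    one score per rated element, in the same order for every rater.
    Returns the proportion of exact matches averaged over all rater pairs."""
    pairs = list(combinations(ratings_by_rater.values(), 2))
    agreements = []
    for a, b in pairs:
        matches = sum(1 for x, y in zip(a, b) if x == y)
        agreements.append(matches / len(a))
    return sum(agreements) / len(agreements)

# Hypothetical scores from three raters on four elements (1 = poor ... 4 = good)
example = {
    "rater_1": [3, 4, 2, 3],
    "rater_2": [3, 4, 3, 3],
    "rater_3": [3, 4, 2, 3],
}
print(f"Average pairwise agreement: {pairwise_agreement(example):.2f}")  # 0.83
```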
