Abstract

Journal of Urology, Technology & Instruments: Surgical Education & Skills Assessment I, 1 Apr 2014

PD6-03 MULTI-INSTITUTIONAL VALIDATION OF FUNDAMENTAL INANIMATE ROBOTIC SKILLS TASKS (FIRST)

Monty Aghazadeh, Miguel Mercado, Andrew Hung, Mihir Desai, Inderbir Gill, Brian Dunkin, and Alvin Goh

https://doi.org/10.1016/j.juro.2014.02.511

INTRODUCTION AND OBJECTIVES

Our group has previously reported on the development and validation of FIRST, a series of four inanimate robotic skills training tasks. Expanding on that initial validation, we now demonstrate the face, content, and construct validity of these tasks in a large multi-institutional cohort of experts and trainees.

METHODS

Ninety-six residents and attending surgeons were enrolled at participating institutions between 2011 and 2013. After watching an instructional video, participants completed each task in succession. Performance metrics were based on accuracy and efficiency. Face and content validity were derived from participants' and experts' ratings of the tasks on a 5-point Likert scale, evaluating 1) difficulty, 2) similarity to the skills required for robotic surgery, 3) usefulness for skills evaluation, 4) usefulness for skills training, and 5) requirement for proficiency. For statistical analysis, participants were grouped by robotic experience into novice (<5 robotic cases as primary surgeon), intermediate (≥5 but ≤30), and expert (>30) groups.

RESULTS

Forty-nine novice, 22 intermediate, and 23 expert surgeons were assessed across all four inanimate robotic skills tasks. The median number of robotic cases (range) performed by the novice, intermediate, and expert groups was 0 (0-3), 10 (5-30), and 200 (55-2000), respectively (p<0.001). Not only did the expert and intermediate groups reliably outperform novices, but experts also outperformed intermediates in all exercises (see Figure 1). Face validity: 75% of all participants agreed that the tasks were of an appropriate level of difficulty, and 84% agreed that the required technical skills reflect robotic surgery skills. Content validity: 95% of expert participants agreed that the tasks were useful for skills evaluation; 100% agreed that the tasks were useful for training and that a skilled robotic surgeon should be able to perform all the tasks presented.

CONCLUSIONS

In this study we confirm the face, content, and construct validity of four inanimate robotic training tasks in a multi-institutional cohort. We demonstrate that the FIRST tasks reliably discriminate between expert, intermediate, and novice surgeons. Validation data from this large multi-institutional cohort will be useful as we incorporate FIRST into a comprehensive robotic training curriculum.
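The experience-based grouping described in METHODS can be sketched in code. This is a minimal illustration only: the example data, column names, and the use of a Kruskal-Wallis test for the three-group comparison are assumptions, as the abstract does not name the statistical method applied.

    # Hypothetical sketch: assign experience groups using the abstract's
    # thresholds and compare a task performance metric across groups.
    # Data and the Kruskal-Wallis choice are illustrative assumptions.
    import pandas as pd
    from scipy.stats import kruskal

    def experience_group(cases: int) -> str:
        """Group by robotic cases as primary surgeon (abstract's cutoffs)."""
        if cases < 5:
            return "novice"
        if cases <= 30:
            return "intermediate"
        return "expert"

    # Illustrative participant data (case counts and one task score).
    df = pd.DataFrame({
        "robotic_cases": [0, 2, 1, 10, 25, 30, 55, 200, 400],
        "task_score":    [42, 55, 50, 68, 72, 70, 88, 95, 91],
    })
    df["group"] = df["robotic_cases"].apply(experience_group)

    # Nonparametric comparison of scores across the three groups.
    groups = [g["task_score"].values for _, g in df.groupby("group")]
    stat, p = kruskal(*groups)
    print(f"Kruskal-Wallis H={stat:.2f}, p={p:.4f}")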
© 2014. The Journal of Urology, Volume 191, Issue 4S, April 2014, Pages e130-e131.
