Abstract

Journal of Urology, Technology & Instruments: Surgical Education & Skills Assessment (II), 1 Apr 2013

1565 VALIDATION OF DRY LAB EXERCISES FOR ROBOTIC TRAINING USING GLOBAL ASSESSMENT TOOL

Patrick Ramos, Jeremy Montez, Casey Ng, Matthew Dunn, Inderbir Gill, and Andrew Hung (Los Angeles, CA)

https://doi.org/10.1016/j.juro.2013.02.3095

INTRODUCTION AND OBJECTIVES

Dry lab exercises are an inexpensive method of robotic surgical training. We evaluated dry lab modules derived from previously validated Mimic virtual reality exercises for face, content, construct, and concurrent validity. We also evaluated the applicability of the Global Evaluative Assessment of Robotic Skills (GEARS) tool for assessing dry lab performance.

METHODS

Participants were prospectively categorized into two groups: robot novice and expert (≥30 robotic cases as primary surgeon). After a standardized introduction, participants completed three virtual reality exercises on the da Vinci Skills Simulator as well as the dry lab version of each exercise (Mimic Technologies) on the da Vinci Surgical System. Simulator performance was assessed using the metrics recorded by the simulator. Dry lab performance was assessed by blinded expert video review using the six-metric GEARS tool. Participants completed a post-study questionnaire. The Wilcoxon nonparametric test compared performance between groups, and Spearman's correlation coefficient assessed the relationship between simulator and dry lab performance.

RESULTS

Mean robotic case experience was 0 for novices and 200 (range 30-2000) for experts. Expert surgeons rated the dry lab exercises as realistic (median score 8/10, range 4-10) and very useful for resident training (median 9/10, range 5-10). Overall, expert surgeons completed all tasks more efficiently (212 vs. 462 sec, p<0.001) and more effectively (GEARS score 26 vs. 19, p<0.001) than novices. Moreover, experts outperformed novices on each individual GEARS metric (p<0.001). Finally, dry lab and simulator performance showed a moderate overall correlation (r=0.54, p<0.001), and most simulator metrics correlated moderately to strongly with the corresponding GEARS metrics (r=0.7, p<0.001).

CONCLUSIONS

The featured robotic dry lab exercises demonstrate face, content, construct, and concurrent validity with the corresponding virtual reality tasks. Additionally, until now, assessment of dry lab exercises has been limited to basic metrics (e.g., time to completion). In establishing construct validity of the dry lab exercises, we demonstrate for the first time the feasibility of applying the global assessment tool GEARS to dry lab training.
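The methods name two nonparametric analyses: a Wilcoxon test comparing novice and expert performance, and Spearman's correlation relating simulator scores to dry lab GEARS scores. The sketch below is a minimal illustration of how such comparisons could be run in Python with SciPy; the variable names and values are hypothetical placeholders, not the study's data, and the abstract does not specify which Wilcoxon variant was used (a rank-sum test is assumed here for two independent groups).

```python
# Illustrative sketch only: hypothetical placeholder scores, not the study's data.
from scipy import stats

# Hypothetical GEARS composite scores for the two groups (placeholder values)
novice_gears = [18, 19, 20, 17, 21, 19]
expert_gears = [25, 27, 26, 28, 24, 26]

# Wilcoxon rank-sum test comparing the two independent groups
stat, p_value = stats.ranksums(expert_gears, novice_gears)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, p={p_value:.4f}")

# Spearman correlation between paired simulator and dry lab scores per participant
simulator_scores = [62, 75, 80, 55, 90, 85]   # placeholder simulator overall scores
dry_lab_scores = [19, 21, 24, 18, 27, 25]     # placeholder GEARS totals, same participants
rho, p_corr = stats.spearmanr(simulator_scores, dry_lab_scores)
print(f"Spearman correlation: rho={rho:.2f}, p={p_corr:.4f}")
```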
© 2013 by American Urological Association Education and Research, Inc. The Journal of Urology, Volume 189, Issue 4S, April 2013, Page e642.
