Abstract

Journal of Urology | Technology & Instruments: Surgical Education & Skills Assessment | 1 Apr 2011

1484 FACE, CONTENT, AND CONSTRUCT VALIDITY OF A NOVEL DA VINCI® SURGICAL SIMULATOR

Andrew J. Hung, Pascal Zehnder, Mukul B. Patil, Casey Ng, Inderbir S. Gill, and Mihir M. Desai (Los Angeles, CA)

https://doi.org/10.1016/j.juro.2011.02.1442

INTRODUCTION AND OBJECTIVES

To evaluate the face, content, and construct validity of a novel robotic surgical simulator platform utilizing the da Vinci® Si surgeon console.

METHODS

Intuitive Surgical (Sunnyvale, CA) has developed a novel skills simulator for robot-assisted surgery, utilizing the da Vinci® Si surgeon console and virtual reality software designed in collaboration with Mimic Technologies (Seattle, WA). Participants were enrolled in a prospective, institutional review board-approved study. Subjects were categorized as novice (no surgical training), intermediate (surgical training but not expert robotic surgeons), or expert robotic surgeons (≥100 cases as primary surgeon). Each participant completed 10 virtual reality exercises, with 3 repetitions of each. Tasks were designed to train and assess camera control and clutching, EndoWrist® manipulation, needle control, and needle driving. Each participant completed a post-study questionnaire (on a 1–10 scale) assessing the realism of the simulator (face validity) and its utility as a teaching tool (content validity).
The simulator recorded participant performance on specific metrics. Performance metrics of experts, intermediates, and novices (construct validity) were compared using the Kruskal-Wallis test.

RESULTS

The novice cohort (n=17) consisted of medical students. The intermediate cohort (n=23) consisted of residents, fellows, and attending surgeons (median surgical experience 5.3 years, range 1–37) who had performed 0–50 robotic console cases (median 0). The expert cohort (n=14) was a multi-disciplinary group of surgeons (urology, cardiothoracic, gynecology) who had performed 100–800 robotic cases (median 300). Participants rated the virtual reality and console experience as very realistic (median score 8/10), and expert surgeons rated the simulator as a useful training tool for residents (median 10/10) and fellows (median 9/10). Experts outperformed novices in almost all metrics (see Table 1) and outperformed intermediates in essential metrics.

Table 1. Comparison of composite performance metric medians (ranges) between groups (Expert n=14, Intermediate n=23, Novice n=17)

| Metric | Expert | Intermediate | Novice | p (Expert vs. Intermediate) | p (Expert vs. Novice) |
| Overall score (%) | 88.4 (70.3–93.3) | 75.6 (25.8–88.7) | 62.8 (48.0–75.5) | 0.0004 | <0.0001 |
| Object drops (#) | 2.5 (0–21.0) | 7.0 (2.0–19.0) | 5.0 (2.0–25.0) | 0.01 | 0.04 |
| Economy of motion (cm) | 7026.9 (6023.0–8845.4) | 8232.0 (6861.1–24388.9) | 10462.6 (8190.1–14060.5) | 0.0007 | <0.0001 |
| Excessive instrument force (sec) | 11.0 (1.5–91.0) | 65.0 (14.5–2397.1) | 113.5 (4.0–716.0) | <0.0001 | 0.0003 |
| Instrument collisions (#) | 26.0 (7.0–54.0) | 48.0 (21.0–349.0) | 111.0 (38.0–262.0) | 0.0003 | <0.0001 |
| Instruments out of view (cm) | 37.8 (12.5–375.5) | 51.9 (8.2–1444.8) | 117.3 (7.6–607.6) | 0.13 | 0.01 |
| Master controller range (cm) | 277.1 (225.6–310.6) | 310.7 (233.8–391.0) | 318.5 (251.0–366.5) | 0.01 | 0.001 |
| Missed targets (#) | 73.0 (26.0–137.0) | 92.0 (59.0–430.0) | 138.0 (64.0–463.0) | 0.01 | 0.0001 |
| Time to completion (sec) | 3789.4 (2787.1–6597.3) | 6330.8 (3272.0–15902.4) | 7615.9 (6017.5–10282.8) | 0.0002 | <0.0001 |

CONCLUSIONS

We present the initial study evaluating the face, content, and construct validity of a novel robotic simulator platform. Importantly, we elucidated a difference between intermediate and expert surgeons on key parameters, identifying relevant points along the learning curve for robotic surgery.

© 2011 by American Urological Association Education and Research, Inc. Volume 185, Issue 4S, April 2011, Page e595.
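The group comparisons in this study rely on the Kruskal-Wallis test, a rank-based test suited to the skewed, non-normal metric distributions seen here (note the wide ranges around each median). As a minimal sketch of the underlying computation, the following pure-Python code computes the Kruskal-Wallis H statistic with tie correction; the sample data are entirely hypothetical and are not the study's measurements, and the published analysis presumably used standard statistical software rather than this code.

```python
# Sketch of the Kruskal-Wallis H statistic (rank-based k-group comparison).
# Data below are hypothetical, not the study's measurements.

from collections import Counter

def average_ranks(values):
    """Rank pooled values (1-based), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(*groups):
    """H statistic with tie correction; compare to chi-squared with k-1 df."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    ranks = average_ranks(pooled)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[idx:idx + len(g)])
        idx += len(g)
        h += rank_sum ** 2 / len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in Counter(pooled).values())
    correction = 1.0 - ties / float(n ** 3 - n)
    return h / correction if correction else h

# Hypothetical "time to completion" (sec) samples for three skill groups:
expert = [3800, 2900, 4100, 3500]
intermediate = [6300, 5200, 7100, 6800]
novice = [7600, 8200, 6900, 9100]
print(round(kruskal_wallis_h(expert, intermediate, novice), 3))  # → 9.269
```

A large H (relative to a chi-squared distribution with k-1 degrees of freedom) indicates that at least one group's distribution is shifted; the pairwise p-values the abstract reports would come from follow-up two-group comparisons of the same kind.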

