PD30-01 USING MACHINE LEARNING TO CLASSIFY PROCEDURE-SPECIFIC SURGICAL EXPERIENCE BASED ON SURGICAL GESTURE RECOGNITION IN A RADICAL PROSTATECTOMY SIMULATION

Nathan Schuler, Lauren Shepard, Tyler Holler, Patrick Saba, and Ahmed Ghazi

Journal of Urology, 1 Apr 2023. https://doi.org/10.1097/JU.0000000000003316.01

Abstract

INTRODUCTION AND OBJECTIVE: The effectiveness of nerve-sparing robot-assisted radical prostatectomy (NS-RARP) is judged mainly by recovery of postoperative erectile function. Previous work has shown that robotic kinematic data can differentiate surgical expertise in suturing tasks and predict continence rates. Our objective was to apply machine learning algorithms to classify surgical experience from surgical gesture data alone, using a validated, realistic NS-RARP simulation with embedded sensors measuring torque on the neurovascular bundles (NVB).

METHODS: 50 certified urologists with an average robotic case volume (RV) of 1206 (range 100 to >5000) completed surveys and an NS-RARP simulation, during which video and force sensor data were collected. Videos were annotated for ten approved individual surgical gestures (e.g., cut, traction) and aggregated by instrument used and by combinations of multiple gestures. Total force, average force, and number of force-peak events were calculated. With a total of 33 inputs, outputs included total and NS-RARP case volume and NVB force data. Data were fit to a Gaussian Mixture model and then separated by gesture-utilization pattern (see the illustrative sketch following this abstract). Tukey's HSD test was used to assess the significance of inter-group differences in RV and force metrics.

RESULTS: Participants were clustered into three groups by mean total RV: 14 Super-Users (SU) (2221), 14 High-Volume (HV) (1017), and 5 Low-Volume (LV) (110) urologists (Figure 1). Significant differences were found between SU and HV for both robotic case (p=0.03) and RARP (p=0.008) volumes. Gesture-pattern comparisons showed significant differences between SU vs HV and HV vs LV in 64% and 30% of gesture inputs, respectively. Force-sensor comparisons showed significant differences in total force and force-peak events between the SU vs LV (p=0.014; p=0.021) and HV vs LV (p=0.021; p=0.046) groups. No significant differences in forces between SU and HV were found.

CONCLUSIONS: This machine learning algorithm successfully categorized surgical experience by caseload and applied force based solely on gesture inputs within a realistic simulated NS-RARP task. Differences in gestures distinguished HV from SU urologists, pointing to targeted areas for improvement in gesture patterns, even for high-volume surgeons.
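The statistical workflow described in METHODS translates naturally into a short script. The sketch below is a minimal illustration under stated assumptions, not the authors' code: the feature matrix, case volumes, and variable names are hypothetical stand-ins for the study's 33 gesture-derived inputs and participant data, which are not public.

```python
# Illustrative sketch of the pipeline in METHODS: cluster surgeons on
# gesture-derived features with a Gaussian Mixture model, then compare
# case volume across clusters with Tukey's HSD.
import numpy as np
from scipy.stats import tukey_hsd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 33 surgeons x 33 gesture-derived inputs
# (gesture counts aggregated by instrument and gesture combination).
X = rng.normal(size=(33, 33))
case_volume = rng.integers(100, 5000, size=33)  # total robotic case volume

# Fit a 3-component Gaussian Mixture (mirroring the SU / HV / LV grouping)
# and assign each surgeon to the most likely component.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)

# Tukey's HSD on case volume across clusters; skip any cluster too small
# to compare. scipy.stats.tukey_hsd is available in recent SciPy releases.
groups = [case_volume[labels == k] for k in range(3)]
groups = [g for g in groups if len(g) > 1]
result = tukey_hsd(*groups)
print(result)  # pairwise mean differences, confidence intervals, p-values
```

In practice the gesture-utilization features would replace the random matrix; the GMM's component assignments then define the groups whose case volumes and force metrics are compared pairwise, as in the abstract's inter-group analysis.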
Source of Funding: None

© 2023 by American Urological Association Education and Research, Inc. Journal of Urology, Volume 209, Issue Supplement 4, April 2023, Page e830.
