Journal of Urology, 1 Apr 2023

PD01-10 USING SURGICAL GESTURES TO BUILD EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR SURGICAL SKILLS ASSESSMENT

Mitchell G. Goldenberg, Runzhuo Ma, Elyssa Y. Wong, Timothy N. Chu, Christian Wagner, and Andrew J. Hung

https://doi.org/10.1097/JU.0000000000003218.10

Abstract

INTRODUCTION AND OBJECTIVE: Traditional assessment of surgical performance is based on evaluation by human experts. Artificial intelligence (AI)-based assessments improve efficiency and scalability but lack transparency in how their scores are calculated. Deconstructing surgical actions into their most basic movements may improve the objectivity and explainability of these assessments. We sought to determine whether an AI algorithm trained on surgical gestures can predict validated measures of technical skill, as a first step toward transparency in automated surgical skills assessment.

METHODS: Data were prospectively collected from two international institutions. Videos of the nerve-sparing (NS) step of robotic-assisted radical prostatectomy (RARP) cases were blindly analyzed by trained human raters using the validated Dissection Assessment for Robotic Technique (DART) tool, and surgical gestures were separately annotated according to a previously published classification (Figure 1a). Surgeon and patient demographics were abstracted for each case. Spearman's rank correlation was used to identify significant associations between DART skill domains and the proportion of each surgical gesture used. The cohort was divided 80:20 into training and validation sets, and an interpretable recurrent neural network (IMV-LSTM) was used to extract information from gesture sequences to predict DART domains.
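To make the modeling step above concrete, the sketch below shows one way such a pipeline could be implemented. It is a hypothetical, simplified reconstruction rather than the authors' code: the data are synthetic, the gesture vocabulary size, DART cut-off, and hyperparameters are invented, and a plain attention-weighted LSTM stands in for the interpretable multi-variable LSTM (IMV-LSTM) named in METHODS. The sketch first computes Spearman correlations between per-case gesture proportions and a DART domain score, then trains the sequence model on an 80:20 split and reports a validation AUC.

```python
# Hypothetical sketch of the analysis pipeline described in METHODS -- not the authors' code.
# All data here are synthetic; the gesture vocabulary, DART cut-off, and model
# hyperparameters are invented for illustration only.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
N_CASES, N_GESTURES, MAX_LEN = 80, 9, 500   # ~80 NS cases, 9 gesture classes (illustrative)

# Synthetic stand-in data: one gesture sequence and one ordinal DART domain score per case.
seqs = [rng.integers(0, N_GESTURES, size=int(rng.integers(250, MAX_LEN))) for _ in range(N_CASES)]
dart_score = rng.integers(1, 6, size=N_CASES)        # 1-5 rating for a single DART domain
dart_high = (dart_score >= 4).astype(np.float32)     # invented cut-off for "high skill"

# Step 1: Spearman's rank correlation between per-case gesture proportions and the DART score.
props = np.stack([np.bincount(s, minlength=N_GESTURES) / len(s) for s in seqs])
for g in range(N_GESTURES):
    rho, p = spearmanr(props[:, g], dart_score)
    print(f"gesture_{g}: rho={rho:+.2f}, p={p:.3f}")

# Step 2: an attention-weighted LSTM classifier over gesture sequences (a simplified
# stand-in for the interpretable IMV-LSTM used in the study).
class GestureLSTM(nn.Module):
    def __init__(self, vocab, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab + 1, emb, padding_idx=vocab)  # last index = padding
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # per-timestep importance weights
        self.head = nn.Linear(hidden, 1)   # binary "high skill" logit

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))                          # (batch, time, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)     # attention over timesteps
        ctx = (h * w.unsqueeze(-1)).sum(dim=1)                 # weighted sequence summary
        return self.head(ctx).squeeze(-1), w                   # logit + inspectable weights

def pad(seq):                              # right-pad every case to a fixed length
    out = np.full(MAX_LEN, N_GESTURES, dtype=np.int64)
    out[: len(seq)] = seq
    return out

X = torch.from_numpy(np.stack([pad(s) for s in seqs]))
y = torch.from_numpy(dart_high)

# 80:20 split into training and validation cases.
idx = torch.from_numpy(rng.permutation(N_CASES))
n_train = int(0.8 * N_CASES)
X_tr, y_tr, X_va, y_va = X[idx[:n_train]], y[idx[:n_train]], X[idx[n_train:]], y[idx[n_train:]]

model = GestureLSTM(N_GESTURES)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(20):                    # full-batch training, enough for a toy example
    opt.zero_grad()
    logits, _ = model(X_tr)
    loss = loss_fn(logits, y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    val_logits, attn_weights = model(X_va)
print("validation AUC:", roc_auc_score(y_va.numpy(), val_logits.numpy()))
```

The per-timestep attention weights in the sketch illustrate where the explainability would come from: for any given case, they indicate which gestures contributed most to the predicted skill rating.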
RESULTS: Eighty cases from 21 surgeons were included in the analysis. Median prior experience was 450 cases (IQR 230-2000). A median of 438 discrete gestures (IQR 254-559) was identified per NS case. Grouping gestures by DART skill domain, we found multiple positive and negative correlations (Figure 1a), with total DART score significantly associated with the proportion of hook and clip gestures (p<.001) (Figure 1b). The neural network was able to predict DART domains, with AUCs of 0.64, 0.66, and 0.64 for tissue handling, tissue retraction, and efficiency, respectively (Figure 1c).

CONCLUSIONS: Surgical gestures may provide a link between the objectivity and scalability of AI-based technical skill assessments and the explainability and familiarity of human expert-based ones. As metrics of surgical performance continue to diversify, these data support a multifaceted approach to the evaluation of surgical performance.

Source of Funding: Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA273031. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

© 2023 by American Urological Association Education and Research, Inc.
The Journal of Urology, Volume 209, Issue Supplement 4, April 2023, Page e67