Journal of Urology
Surgical Technology & Simulation: Training & Skills Assessment II (MP47)
1 Apr 2020

MP47-17 DEEP LEARNING MODELS TO PREDICT PSYCHOMOTOR ERRORS USING RAW KINEMATIC DATA FROM VIRTUAL REALITY SIMULATOR

Andrew Hung*, Aastha, Jessica Nguyen, and Yan Liu
https://doi.org/10.1097/JU.0000000000000902.017

Abstract

INTRODUCTION AND OBJECTIVE: Kinematic performance metrics during robotic surgery have been linked to clinical outcomes. Thus far, the kinematic data we have analyzed, in the form of automated performance metrics (APMs), have been data summarized over specific steps of a surgical procedure and focused on economy of movement (i.e., efficiency). Herein, we evaluate for the first time raw (unprocessed) kinematic data during robotic simulation, using artificial intelligence (AI) methods to predict Mimic's composite score and select psychomotor errors.

METHODS: Our analysis of raw kinematic data centered on a single needle-driving task ("Basic Suture Sponge") on the Mimic Technologies FlexVR platform. Eleven participants (surgeons and non-surgeons) completed the simulation exercise 5 to 11 times each. For each exercise, spatial x, y, z coordinates of the camera and instruments were collected at 30 Hz. These logged data were then used to infer "micro-displacements" along each degree of freedom and the overall spatial-temporal instrument trajectory during the exercise. Psychomotor errors and the composite "M score" were reported by the simulator. We used several sequential deep learning algorithms, including Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) networks, to map the raw data and to predict the composite score and general surgical skills: needle targeting and instrument collisions.

RESULTS: Sixty simulation sessions were divided into training, validation, and testing cohorts of 37, 12, and 11 sessions, respectively. The mean ± SD composite "M score" was 925.75 ± 324.85. Predicting the composite simulation score: the LSTM network best predicted the composite "M score," with a mean absolute error of 273.09 (29%). Predicting needle mistargeting: GRU and RNN learned best from the dataset (accuracy = 76%; p = 0.38), achieving 6% higher accuracy than the base ratio of classes in the dataset (30% perfect/near-perfect needle targeting [0-1 misses] vs. 70% imperfect targeting [>1 misses]). Predicting instrument collision: RNN outperformed every other model on this task with 64.7% accuracy (11.1% higher than the base classifier, p = 0.18); LSTM and GRU followed with 60% and 58.8% accuracy, respectively.

CONCLUSIONS: Our preliminary results suggest that the "micro-displacement" data captured during a simulation exercise can, at minimum, moderately capture aspects of suturing psychomotor skill. With further optimization, AI methods may be able to automate psychomotor skills assessment.
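As a rough illustration of the kind of pipeline the METHODS section describes, the sketch below shows how per-frame "micro-displacements" could be derived from 30 Hz x, y, z coordinate logs and passed to a small LSTM regressor for the composite "M score." This is a minimal sketch only: the function and class names (micro_displacements, ScoreLSTM), the nine-channel layout, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: per-frame micro-displacements from 30 Hz coordinate
# logs, fed to a small LSTM that regresses a single composite score.
# All names, shapes, and hyperparameters here are assumptions for illustration.

import numpy as np
import torch
import torch.nn as nn


def micro_displacements(xyz: np.ndarray) -> np.ndarray:
    """Frame-to-frame displacement along each degree of freedom.

    xyz: array of shape (T, C) -- T frames logged at 30 Hz, C coordinate
    channels (e.g., x, y, z for the camera and each instrument, concatenated).
    Returns an array of shape (T - 1, C) of signed per-frame deltas.
    """
    return np.diff(xyz, axis=0)


class ScoreLSTM(nn.Module):
    """Single-layer LSTM mapping a displacement sequence to one scalar score."""

    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); regress from the final hidden state.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)


if __name__ == "__main__":
    # Toy example: one synthetic session (600 frames = 20 s at 30 Hz,
    # 9 channels = x, y, z for the camera plus two instruments).
    rng = np.random.default_rng(0)
    coords = rng.normal(size=(600, 9)).cumsum(axis=0)

    deltas = micro_displacements(coords)                      # (599, 9)
    batch = torch.tensor(deltas, dtype=torch.float32)[None]   # (1, 599, 9)

    model = ScoreLSTM(n_channels=9)
    predicted_m_score = model(batch)
    print(predicted_m_score.shape)  # torch.Size([1])

The GRU and plain RNN variants reported in the RESULTS could be sketched the same way by swapping nn.LSTM for nn.GRU or nn.RNN, with a sigmoid head for the binary needle-targeting and instrument-collision labels.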
Source of Funding: none

© 2020 by American Urological Association Education and Research, Inc.
Volume 203, Issue Supplement 4, April 2020, Page e691
