MP34-06 MACHINE LEARNING USING A MULTI-TASK CONVOLUTIONAL NEURAL NETWORK CAN ACCURATELY ASSESS ROBOTIC SKILLS

Jeffrey Gahan*, Ryan Steinberg, Alaina Garbens, Xingming Qu, and Eric Larson

Journal of Urology, Surgical Technology & Simulation: Training & Skills Assessment I (MP34), 1 Apr 2020
https://doi.org/10.1097/JU.0000000000000878.06

Abstract

INTRODUCTION AND OBJECTIVE: Surgical skill evaluation relies on either direct observation or video review by humans. Both are time consuming, costly, and difficult to perform at a large scale. Machine learning could make video review economically viable at scale. We sought to train and evaluate a multi-task convolutional neural network to predict surgical proficiency scores using a validated robotic-assisted suturing model.

METHODS: Twenty-three videos of surgeons with varying robotic skill levels completing a validated urethrovesical anastomosis model were used. Global Evaluative Assessment of Robotic Skills (GEARS) scores were assigned to each video by expert reviewers and used as training data for the machine learning algorithm. The front wrist joint, rear wrist joint, and needle driver tip of each instrument, as well as the needle, were manually labelled in 300 frames to train a YOLO object detector (Figure 1). Each video was randomly broken into 6-second clips, providing multiple representative clips per video. A heatmap representing object movement, based on coordinate differences over time, was generated (Figure 2).
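The movement heatmap described above can be sketched roughly as follows. The abstract does not specify the implementation details, so the grid resolution, the object names, and the choice to bin displacement magnitudes per cell are illustrative assumptions only.

```python
import numpy as np

def motion_heatmap(tracks, frame_w, frame_h, grid=32):
    """Accumulate per-object movement into a spatial heatmap.

    tracks: dict mapping a tracked object name (e.g. 'needle',
            'left_tip' -- hypothetical labels) to an (n_frames, 2)
            array of (x, y) pixel coordinates over one clip.
    Returns a (grid, grid) array where each cell holds the total
    frame-to-frame displacement of objects passing through it.
    """
    heat = np.zeros((grid, grid))
    for coords in tracks.values():
        coords = np.asarray(coords, dtype=float)
        # frame-to-frame displacement magnitude (coordinate differences over time)
        step = np.linalg.norm(np.diff(coords, axis=0), axis=1)
        # bin each displacement at the grid cell where it occurred
        gx = np.clip((coords[1:, 0] / frame_w * grid).astype(int), 0, grid - 1)
        gy = np.clip((coords[1:, 1] / frame_h * grid).astype(int), 0, grid - 1)
        np.add.at(heat, (gy, gx), step)
    return heat
```

A heatmap like this gives the downstream network a fixed-size summary of instrument motion regardless of clip length.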
Using a multi-task convolutional neural network, each clip was assigned a GEARS score. A composite GEARS score for each video was then calculated using a novel video pooling method. Model training was performed using leave-one-out cross-validation. The output was the predicted score for each GEARS domain. A prediction was considered successful when the model's domain score fell within the range assigned by human experts.

RESULTS: Algorithm training on the 23 videos, each 1-2 minutes in duration, took 4 hours on a supercomputer. The multi-task model successfully predicted 6 of 6 domains in 65% of videos, 5 of 6 domains in 74% of videos, and 4 of 6 domains in 74% of videos.

CONCLUSIONS: With limited training data, a novel multi-task convolutional neural network can accurately predict surgical proficiency as assessed by GEARS scoring in a urethrovesical anastomosis model. Further testing using actual operative videos is warranted.

Source of Funding: None

© 2020 by American Urological Association Education and Research, Inc. Journal of Urology, Volume 203, Issue Supplement 4, April 2020, Page e505
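The evaluation loop described in the abstract (leave-one-out cross-validation with clip-level predictions pooled into a per-video score) can be outlined as below. The authors' pooling method is described only as "novel", so this sketch substitutes a simple mean; `train_fn` and `predict_fn` are hypothetical placeholders for the model's fit and clip-scoring steps.

```python
from statistics import mean

def loo_pooled_scores(videos, train_fn, predict_fn):
    """Leave-one-out evaluation with per-video pooling of clip scores.

    videos:     list of lists of clips (one inner list per video).
    train_fn:   fits a model on the training videos (hypothetical).
    predict_fn: maps (model, clip) -> predicted domain score (hypothetical).
    Returns one pooled composite score per held-out video.
    """
    pooled = []
    for i, held_out in enumerate(videos):
        # leave exactly one video out; train on the remaining 22
        train_set = videos[:i] + videos[i + 1:]
        model = train_fn(train_set)
        # pool clip-level predictions into a composite video score
        pooled.append(mean(predict_fn(model, clip) for clip in held_out))
    return pooled
```

With only 23 videos, leave-one-out makes the most of the data: every video serves once as the test case while all others train the model.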
