Abstract

Surgical gesture detection can provide targeted, automated surgical skill assessment and feedback during surgical training for robot-assisted surgery (RAS). Several data sources, including surgical videos, robot tool kinematics, and electromyography (EMG), have been proposed to reach this goal. We aimed to extract features from electroencephalogram (EEG) data and use them in machine learning algorithms to classify robot-assisted surgical gestures. EEG was collected from five RAS surgeons with varying experience while performing 34 robot-assisted radical prostatectomies over the course of three years. Eight dominant-hand and six non-dominant-hand gesture types were extracted and synchronized with associated EEG data. Network neuroscience algorithms were utilized to extract functional brain network and power spectral density features. Sixty extracted features were used as input to machine learning algorithms to classify gesture types. The analysis of variance (ANOVA) F-value statistical method was used for feature selection, and 10-fold cross-validation was used to validate the proposed method. The proposed feature set used in the extra trees (ET) algorithm classified eight gesture types performed by the dominant hand of five RAS surgeons with 90% accuracy, 90% precision, and 88% sensitivity, and classified six gesture types performed by the non-dominant hand with 93% accuracy, 94% precision, and 94% sensitivity.
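
As a rough illustration of the pipeline described above, the sketch below combines ANOVA F-value feature selection with an extra trees classifier under 10-fold cross-validation using scikit-learn. The feature matrix, gesture labels, and the choice of 30 selected features are placeholders for illustration only, not the study's data or settings.

    # Hypothetical sketch of the pipeline described in the abstract:
    # ANOVA F-value feature selection followed by an extra trees classifier,
    # evaluated with 10-fold cross-validation. X and y are synthetic
    # placeholders, not the study's EEG-derived features or gesture labels.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 60))    # 60 functional-network / PSD features per gesture instance
    y = rng.integers(0, 8, size=400)  # 8 dominant-hand gesture types

    pipeline = Pipeline([
        ("select", SelectKBest(score_func=f_classif, k=30)),  # ANOVA F-value feature selection
        ("clf", ExtraTreesClassifier(n_estimators=200, random_state=0)),
    ])

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
    print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

Placing feature selection inside the pipeline ensures the ANOVA scores are recomputed on each training fold, avoiding information leakage into the held-out fold.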

Highlights

  • Previous work on skill evaluation in robot-assisted surgery (RAS) mainly exploited kinematic data recorded by the robot and global measurements of the task

  • We propose functional brain network and power spectral density features for use in machine learning algorithms to classify surgical gestures performed in the operating room (OR) with the dominant and non-dominant hands

  • Classification results for different machine learning methods and different numbers of selected features are presented in Figure 2 for the dominant and non-dominant hands

Introduction

Previous work on skill evaluation in RAS mainly exploited kinematic data recorded by the robot and global measurements of the task. These measurements include time to completion [5,6], speed and number of hand movements [5], distance travelled [6], and force and torque signatures [6,7,8]. They provide a global assessment of skill level and neglect the fact that a surgical task is composed of several different gestures. These skill evaluation methods have two main shortcomings. First, they use a single model for a whole complex task, whereas segmenting the task into gestures allows a simpler model to be used for each gesture. Second, they assume that a trainee is either skilled or unskilled at performing the whole task, while a trainee may be skilled in some segments of the task and unskilled in others, since the complexity level differs from one gesture to another.
