Abstract

Objective, skill-assessment-based personal performance feedback is a vital part of surgical training. Either kinematic data (acquired through surgical robotic systems, sensors mounted on tooltips or wearable sensors) or visual input data can be employed to perform objective, algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their utility is limited and they are not suited to MIS training; therefore, our proposed solution relies on 2D features. The application of additional sensors could potentially enhance the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed using the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU–ISI Gesture and Skill Assessment Working Set (JIGSAWS), co-developed by Johns Hopkins University and Intuitive Surgical Inc. Using this well-established dataset gives us the opportunity to comparatively evaluate different feature extraction techniques. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy, averaged over five cross-validation trials, was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing).
By introducing new visual features (such as image-based orientation and image-based collision detection) or, on the evaluation side, utilising other Support Vector Machine kernel methods, tuning the hyperparameters or using other classification methods (e.g., the boosted trees algorithm), classification accuracy can be further improved. We showed the potential of optical flow as an input for RAMIS skill assessment by independently evaluating the methods of an established skill assessment benchmark on these data, highlighting the maximum accuracy achievable with them. The highest-performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23% and 83.54% for the skills of Suturing, Needle-Passing and Knot-Tying, respectively.
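As a rough illustration of the kind of pipeline the abstract describes (a minimal sketch of our own, not the paper's implementation): per-frame motion fields can be reduced to a fixed-length feature vector per trial, which a classifier then maps to a skill class. Here simple frame differencing stands in for dense optical flow, and a nearest-class-mean rule stands in for the SVM; all function names and the synthetic data are our assumptions.

```python
import numpy as np

def motion_features(frames, bins=8):
    """Reduce a (T, H, W) grayscale clip to a fixed-length feature vector.

    Frame differencing is a cheap stand-in for dense optical flow
    magnitude; a real pipeline would substitute a proper flow estimator.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    mags = diffs.reshape(len(diffs), -1)
    # Histogram of motion magnitudes per frame, averaged over the clip.
    hists = [np.histogram(m, bins=bins, range=(0, 255))[0] for m in mags]
    feat = np.mean(hists, axis=0)
    return feat / (feat.sum() + 1e-9)  # normalise to a distribution

def nearest_mean_classify(train_X, train_y, x):
    """Toy nearest-class-mean classifier standing in for the SVM."""
    classes = sorted(set(train_y))
    means = {c: np.mean([f for f, y in zip(train_X, train_y) if y == c], axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(x - means[c]))

# Synthetic demo: "novice" clips contain far more erratic pixel motion.
rng = np.random.default_rng(0)
expert = [rng.integers(0, 40, (10, 16, 16)) for _ in range(3)]
novice = [rng.integers(0, 255, (10, 16, 16)) for _ in range(3)]
X = [motion_features(c) for c in expert + novice]
y = ["expert"] * 3 + ["novice"] * 3
print(nearest_mean_classify(X[:-1], y[:-1], X[-1]))  # classify held-out clip
```

The histogram-of-motion representation is deliberately order-invariant over frames, mirroring how fixed-length descriptors are typically fed to per-trial classifiers.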

Highlights

  • Taking the kinematic data of the JHU–ISI Gesture and Skill Assessment Working Set (JIGSAWS) as input, the authors of [29] implemented nine different evaluation methods to provide a benchmark for skill assessment solutions, categorising surgical skills into expert, intermediate and novice classes

  • We used the video data of the JIGSAWS dataset and evaluated the previously described benchmark methods. These data consist of recordings of eight subjects with varying surgical experience, each performing the three surgical subtasks (Knot-Tying, Needle-Passing and Suturing) on the da Vinci Surgical System

  • Our method is similar in principle to their Space-Time Interest Points (STIP) method, as it tracks the movement of interest points/features over time
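The interest-point-tracking idea in the last bullet can be sketched as follows (a toy illustration of ours, not the STIP implementation from the benchmark): pick high-gradient points in the first frame, then follow each one through subsequent frames by exhaustive patch matching in a small search window.

```python
import numpy as np

def detect_points(frame, n=5):
    """Pick the n pixels with the strongest gradient magnitude
    (a crude stand-in for a proper interest-point detector)."""
    gy, gx = np.gradient(frame.astype(np.float32))
    mag = np.hypot(gx, gy)
    mag[:2, :] = mag[-2:, :] = mag[:, :2] = mag[:, -2:] = 0  # ignore borders
    idx = np.argsort(mag, axis=None)[-n:]
    return [np.unravel_index(i, frame.shape) for i in idx]

def track_point(prev, curr, pt, radius=3, patch=2):
    """Follow one point into the next frame by minimising the sum of
    squared differences over a small search window."""
    r, c = pt
    ref = prev[r - patch:r + patch + 1, c - patch:c + patch + 1]
    best, best_pt = np.inf, pt
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if (patch <= rr < curr.shape[0] - patch
                    and patch <= cc < curr.shape[1] - patch):
                cand = curr[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1]
                ssd = np.sum((ref - cand) ** 2)
                if ssd < best:
                    best, best_pt = ssd, (rr, cc)
    return best_pt

# Demo: a bright square shifting one pixel down-right per frame.
frames = []
for t in range(4):
    f = np.zeros((20, 20))
    f[5 + t:9 + t, 5 + t:9 + t] = 255
    frames.append(f)

tracks = []
for p in detect_points(frames[0]):
    track = [p]
    for t in range(1, len(frames)):
        track.append(track_point(frames[t - 1], frames[t], track[-1]))
    tracks.append(track)
```

The per-point trajectories (`tracks`) are the raw material from which movement statistics, such as path length or smoothness over time, could then be derived.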



Introduction

Minimally Invasive Surgery (MIS) is a collection of surgical techniques that aim to limit the size of incisions and tissue trauma, in order to decrease the recovery time, the inflicted pain and the risk of infections during surgeries. The introduction of MIS has revolutionised operations over the past 50 years [1]. Twenty-five years ago, with the introduction of robotic telesurgical systems, a new form of MIS was born: Robot-Assisted Minimally Invasive Surgery (RAMIS) [2,3]. External sensors such as sensory gloves have been used to evaluate surgical skill in open surgery [4]. For MIS, mounted flex sensors have been employed [5], or the Fundamentals of Robotic Surgery (FRS) Dome was used to gather data [6].

