Abstract

As the number of robotic surgery procedures has increased, so has the importance of evaluating surgical skills in these techniques. It is difficult, however, to automatically and quantitatively evaluate surgical skills during robotic surgery, as these skills are primarily associated with the movement of surgical instruments. This study proposes a deep learning-based surgical instrument tracking algorithm to evaluate surgeons’ skills in performing robotic surgery. The method overcomes two main challenges: occlusion and maintaining the identity of surgical instruments. In addition, surgical skill prediction models were developed using motion metrics calculated from the motion of the instruments. The tracking method was applied to 54 video segments and evaluated by root mean squared error (RMSE), area under the curve (AUC), and Pearson correlation analysis. The RMSE was 3.52 mm; the AUCs at 1 mm, 2 mm, and 5 mm were 0.7, 0.78, and 0.86, respectively; and Pearson’s correlation coefficients were 0.9 on the x-axis and 0.87 on the y-axis. The surgical skill prediction models achieved 83% accuracy against the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS). The proposed method was able to track instruments during robotic surgery, suggesting that manual skill assessment by surgeons could be replaced by the proposed automatic and quantitative evaluation method.
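The exact evaluation protocol is not spelled out in this summary. The sketch below shows one plausible way to compute the three reported quantities, assuming predicted and ground-truth instrument-tip positions as N x 2 arrays in millimetres; the function names, and the reading of per-threshold AUC as the normalized area under the "fraction of frames within d mm" curve, are assumptions, not the authors' code.

```python
import numpy as np

def rmse(pred, gt):
    """RMSE over per-frame Euclidean tip-position errors (pred, gt: N x 2 arrays, mm)."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

def auc_at_threshold(pred, gt, max_dist_mm, n_steps=100):
    """Normalized area under the 'fraction of frames within d mm' curve, d in (0, max_dist_mm]."""
    err = np.linalg.norm(pred - gt, axis=1)
    thresholds = np.linspace(0.0, max_dist_mm, n_steps + 1)[1:]
    return float(np.mean([(err <= d).mean() for d in thresholds]))

def pearson_per_axis(pred, gt):
    """Pearson correlation of predicted vs. ground-truth coordinates, per axis (x, y)."""
    return tuple(float(np.corrcoef(pred[:, i], gt[:, i])[0, 1]) for i in range(pred.shape[1]))

# Hypothetical usage, given pred and gt trajectories:
# print(rmse(pred, gt))
# print([auc_at_threshold(pred, gt, t) for t in (1.0, 2.0, 5.0)])
# print(pearson_per_axis(pred, gt))
```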

Highlights

  • Most types of robotic surgery require training, with a classic learning curve eventually resulting in consistent performance [1]

  • This evaluation may be quantified by determining the motions of the surgical instruments (SIs) and calculating nine defined motion metrics related to surgical skills (an illustrative metric computation is sketched after this list)

  • The present study proposes a system that quantitatively assesses the surgical skills of a surgeon during robotic surgery by visual tracking of SIs using a deep learning method
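The nine motion metrics themselves are not listed in this summary. As an illustration only, the sketch below computes three metrics commonly used in surgical skill studies (path length, mean speed, and mean jerk) from a single instrument trajectory; the metric choices, names, and frame rate are assumptions, not the paper's definitions.

```python
import numpy as np

def motion_metrics(traj, fps=30.0):
    """Illustrative metrics from one instrument trajectory (traj: N x 2 tip positions, mm).

    Computes path length, mean speed, and mean jerk magnitude; the paper's
    nine metrics are not reproduced here.
    """
    dt = 1.0 / fps
    vel = np.diff(traj, axis=0) / dt    # mm/s
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(vel, axis=0) / dt     # mm/s^2
    jerk = np.diff(acc, axis=0) / dt    # mm/s^3
    return {
        "path_length_mm": float(np.sum(speed) * dt),  # sum of segment lengths
        "mean_speed_mm_s": float(np.mean(speed)),
        "mean_jerk_mm_s3": float(np.mean(np.linalg.norm(jerk, axis=1))),
    }
```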


Introduction

Most types of robotic surgery require training, with a classic learning curve eventually resulting in consistent performance [1]. Deep learning-based approaches have been found to overcome the limitations of manual assessment and have been applied to several tasks during robotic surgery, such as classification [17,18], detection [19,20], segmentation [21], and pose estimation [22,23] of SIs, as well as phase identification [24,25] and action recognition [26]. These methods are limited, however, with respect to determining the trajectory of SIs: semantic segmentation methods applied to robotic surgery images recognize occluded instruments as a single object when the SI locations are close or overlapping [27,28].

[Figure: Overview of the surgical skill assessment system in robotic surgery.]

This retrospective study was approved by the Institutional Review Board of Seoul National University Hospital (IRB No. H-1912-081-1088).
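The paper's tracker and re-identification details appear in later sections and are not reproduced here. To make the identity-maintenance problem concrete, the following is a minimal sketch of a standard IoU-based frame-to-frame association step using the Hungarian algorithm (SciPy's linear_sum_assignment); this is a generic baseline, not the authors' method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(track_boxes, det_boxes, min_iou=0.3):
    """Match last-known track boxes to current-frame detections by maximizing total IoU.

    Returns (matched (track, detection) index pairs, unmatched track indices,
    unmatched detection indices); unmatched tracks are candidates for re-identification.
    """
    if not track_boxes or not det_boxes:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    return (matches,
            [i for i in range(len(track_boxes)) if i not in matched_t],
            [j for j in range(len(det_boxes)) if j not in matched_d])
```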

Surgical Procedure
Dataset
Instance Segmentation Framework
Tracker in Tracking Framework
Re-Identification in Tracking Framework
Arm-Indicator Recognition on the Robotic Surgery View
Surgical Skill Prediction Model Using Motion Metrics
Trajectory of Multiple Surgical Instruments and Evaluation
Discussion
Findings
Conclusions
