Abstract

Automatic surgical skill assessment in robotic surgery based on video data is essential for facilitating faster learning curves for trainees while relieving expert surgeons from the time- and cost-intensive feedback process. Recent years have seen several advances in this area through the use of deep learning. While current research focuses on novel architectures, the influence of video preprocessing on their performance remains unknown. In this work, we present the first investigation of the influence of video preprocessing on deep learning-based surgical skill assessment. To this end, we integrated four preprocessing modules, i.e. Deblurring, Segment-based Sampling, Optical Flow, and the Combination of all of them, into skill assessment on the JIGSAWS dataset using a well-established network architecture. While none of the single preprocessing steps showed a clear improvement, the Combination of all steps yielded higher median performance and lower variance. Furthermore, we performed frame-wise investigations of the influence of optical flow artifacts and their reduction in the combined setting. Our results highlight the potential of well-calibrated video preprocessing for automatic surgical skill assessment.
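The abstract names optical flow as one of the four preprocessing modules. Purely as an illustration (the authors' actual pipeline is not specified here), the following sketch computes dense optical flow between consecutive video frames using OpenCV's Farneback method; the function name and parameter values are assumptions for demonstration, not the paper's implementation.

```python
# Hypothetical sketch: dense optical flow as a video-preprocessing step.
# Uses OpenCV's Farneback method for illustration only; the paper's
# actual preprocessing implementation is not described in the abstract.
import cv2

def extract_optical_flow(video_path):
    """Yield dense optical-flow fields (H x W x 2) for consecutive frame pairs."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Two-channel flow field: per-pixel displacement between the two frames.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        yield flow
        prev_gray = gray
    cap.release()
```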
