Background and Objective
Surgical skill assessment aims to objectively evaluate trainee surgeons and provide them with constructive feedback. Conventional methods require direct observation and assessment by surgical experts, which is both unscalable and subjective. The recent introduction of robotic systems into the operating room has made it possible to automatically evaluate a trainee's expertise level on representative maneuvers by applying machine learning to motion analysis. The feature extraction technique plays a critical role in such an automated surgical skill assessment system.

Methods
We present a direct comparison of nine well-known feature extraction techniques for automated surgical skill evaluation: statistical features, principal component analysis, the discrete Fourier and discrete cosine transforms, codebooks, deep learning models, and auto-encoders. Toward near real-time evaluation, we also investigate the effect of the time interval on classification accuracy and efficiency.

Results
We validate the study on the benchmark JIGSAWS robotic surgical training dataset. Accuracies of 95.63%, 90.17% and 90.26% with principal component analysis, and 96.84%, 92.75% and 95.36% with a deep convolutional neural network, for suturing, knot tying and needle passing, respectively, highlight the effectiveness of these two techniques in extracting the most discriminative features across surgical skill levels.

Conclusions
This study contributes toward the development of an online, automated and efficient surgical skill assessment technique.
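The pipeline the abstract describes (extract features from kinematic trial data, then classify skill level) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, channel count, and the use of an SVM classifier are assumptions, and the data here is synthetic rather than JIGSAWS kinematics.

```python
# Hedged sketch: PCA feature extraction followed by skill-level
# classification. Array sizes and the SVM choice are illustrative
# assumptions; synthetic data stands in for JIGSAWS trajectories.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, window, channels = 120, 150, 6          # assumed, not from the paper
X = rng.normal(size=(n_trials, window * channels))  # flattened kinematic windows
y = rng.integers(0, 3, size=n_trials)             # 0=novice, 1=intermediate, 2=expert

# Reduce each flattened window to a few principal components,
# then classify the skill level from those components.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)
print(pred.shape)
```

Swapping the `PCA` step for statistical summaries, DFT/DCT coefficients, or learned deep features would reproduce the other comparison arms in the same framework; shortening `window` is how the time-interval experiment trades accuracy against latency.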