Abstract

Automated surgical video analysis promises improved healthcare. We propose a novel spatial-context-aware combined loss function for end-to-end encoder-decoder training for Surgical Phase Classification (SPC) on laparoscopic cholecystectomy (LC) videos. The proposed loss function leverages fine-grained class activation maps obtained from fused multi-layer Layer-CAM for supervised learning of SPC, yielding improved Layer-CAM explanations. After classification, we apply graph theory to incorporate known hierarchies of surgical phases. We report a peak SPC accuracy of 96.16%, precision of 94.08%, and recall of 90.02% on the public Cholec80 dataset, which comprises 7 phases. Our method uses only 73.5% of the parameters of the existing state-of-the-art approach while improving accuracy by 0.5% and precision by 1.76%, with comparable recall and an order of magnitude lower standard deviation. We also propose a DNN-based surgical skill assessment methodology. This approach uses the surgical phase prediction scores from the final fully connected layer of the spatial-context-aware classifier to form a multi-channel temporal signal of surgical phases. A time-invariant representation is obtained from this temporal signal through time- and frequency-domain analyses. Autoencoder-based time-invariant features are used for reconstruction and for identifying prominent peaks in the resulting dissimilarity curves. We devise a surgical skill measure (SSM) based on the spatial-context-aware temporal-prominence-of-peaks curve. SSM values are expected to be high when a procedure is executed skillfully, in line with the expert-assessed GOALS metric. We illustrate this trend on the Cholec80 and m2cai16-tool datasets in comparison with the GOALS metric. The concurrence of the SSM trend with the GOALS metric on these test videos makes it a promising step towards automated surgical skill assessment.
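
The sketch below illustrates, in outline only, the peak-prominence step of the skill-assessment pipeline described above: a per-frame dissimilarity curve (here assumed to be the autoencoder reconstruction error over the multi-channel phase-score signal) is searched for prominent peaks, and the prominences are aggregated into a single score. The function names, the prominence threshold, and the mapping from total prominence to a bounded score are illustrative assumptions, not the paper's exact SSM formulation.

```python
# Minimal sketch of a prominence-of-peaks skill score over a dissimilarity curve.
# Assumptions (not from the paper): the dissimilarity is the per-frame L2
# reconstruction error, and lower total peak prominence maps to a higher score.
import numpy as np
from scipy.signal import find_peaks


def dissimilarity_curve(phase_scores, reconstructed):
    """Per-frame dissimilarity between the multi-channel phase-score signal
    (T x num_phases) and its autoencoder reconstruction."""
    return np.linalg.norm(phase_scores - reconstructed, axis=1)


def skill_measure_from_peaks(dissimilarity, min_prominence=0.05):
    """Illustrative SSM-style score: locate prominent peaks in the
    dissimilarity curve and map their total prominence into (0, 1],
    so smoother curves (fewer/weaker peaks) yield higher scores."""
    peaks, props = find_peaks(dissimilarity, prominence=min_prominence)
    total_prominence = props["prominences"].sum() if peaks.size else 0.0
    return 1.0 / (1.0 + total_prominence)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, num_phases = 500, 7  # Cholec80 defines 7 surgical phases
    scores = rng.random((T, num_phases))                     # stand-in phase scores
    recon = scores + 0.02 * rng.standard_normal((T, num_phases))  # stand-in reconstruction
    curve = dissimilarity_curve(scores, recon)
    print("Illustrative skill score:", skill_measure_from_peaks(curve))
```

The inverse mapping from total prominence to the final score is only one plausible choice that keeps the stated property that skillful executions receive higher values; the paper's actual SSM may aggregate the temporal-prominence-of-peaks curve differently.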
