Abstract

Automatic surgical instrument segmentation is a crucial step in robot-assisted surgery. Encoder-decoder-based methods often fuse high-level and low-level features directly through skip connections to supplement detailed information. However, fusing irrelevant information also increases misclassification and wrong segmentation, especially in complex surgical scenes. Uneven illumination often makes instruments resemble background tissue, which greatly increases the difficulty of automatic surgical instrument segmentation. To address this problem, this paper proposes a novel network, the context-guided bidirectional attention network (CGBA-Net), which guides feature selection toward effective features for instrument segmentation. A guidance connection attention (GCA) module is inserted into the network to adaptively filter out irrelevant low-level features. Moreover, we propose a bidirectional attention (BA) module for the GCA module, which captures both local information and local-global dependencies in surgical scenes to provide accurate instrument features. The superiority of CGBA-Net is verified by multi-instrument segmentation on two publicly available datasets from different surgical scenarios: an endoscopic vision dataset (EndoVis 2018) and a cataract surgery dataset. Extensive experiments demonstrate that CGBA-Net outperforms state-of-the-art methods on both datasets, and ablation studies on the same datasets confirm the effectiveness of the proposed modules. CGBA-Net improves the accuracy of multi-instrument segmentation, accurately classifying and segmenting instruments, and the proposed modules effectively provide instrument-related features for the network.
