Abstract

In clinical practice, automatic polyp segmentation from colonoscopy images is an effective aid to the early detection and prevention of colorectal cancer. This paper proposes a new deep model for accurate polyp segmentation based on an encoder-decoder framework. ResNet50 is adopted as the encoder, and three functional modules are introduced to improve performance. First, a hybrid channel-spatial attention module reweights the encoder features both spatially and channel-wise, enhancing the features critical to the segmentation task while suppressing irrelevant ones. Second, a global context pyramid feature extraction module and a series of global context flows are proposed to extract and deliver global context information: the former captures multi-scale, multi-receptive-field global context, while the latter explicitly transmits it to each decoder level. Finally, a feature fusion module is designed to effectively incorporate high-level features, low-level features, and global context information while accounting for the gaps between these different types of features. Together, these modules allow the model to fully exploit global context information to infer complete polyp regions. Extensive experiments on five public colorectal polyp datasets demonstrate that the proposed network has strong learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods.
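To make the hybrid channel-spatial attention idea concrete, the following is a minimal PyTorch sketch, assuming a sequential design in which channel attention is applied first and spatial attention second; the layer sizes, reduction ratio, and class name are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a hybrid channel-spatial attention module (assumed
# CBAM-style sequential design; hyperparameters are illustrative only).
import torch
import torch.nn as nn

class HybridChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, learn per-channel weights.
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Spatial attention: pool across channels, learn a per-pixel weight map.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise reweighting of the encoder features.
        ca = self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        x = x * ca
        # Spatial reweighting on the channel-refined features.
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        sa = self.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * sa

if __name__ == "__main__":
    # Example: reweight a stage-3 ResNet50 feature map (1024 channels).
    feats = torch.randn(2, 1024, 22, 22)
    attn = HybridChannelSpatialAttention(channels=1024)
    print(attn(feats).shape)  # torch.Size([2, 1024, 22, 22])

In this sketch, the attention weights are produced from pooled summaries of the feature map and applied multiplicatively, so features relevant to the polyp region are amplified and background responses are attenuated before decoding.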
