Abstract

Recently, deep convolutional neural networks (CNNs) have provided an effective tool for automated polyp segmentation in colonoscopy images. However, most CNN-based methods do not fully exploit the feature interactions among different layers and often fail to deliver satisfactory segmentation performance. In this paper, a novel attention-guided pyramid context network (APCNet) is proposed for accurate and robust polyp segmentation in colonoscopy images. Specifically, considering that different network layers represent the polyp in different aspects, APCNet first extracts multi-layer features in a pyramid structure, then applies an attention-guided multi-layer aggregation strategy that refines the context features of each layer using the complementary information of the other layers. To obtain rich context features, APCNet employs a context extraction module that explores the context information of each layer via local information retainment and global information compaction. Through top-down deep supervision, APCNet performs coarse-to-fine polyp segmentation and precisely localizes the polyp region. Extensive experiments on two in-domain and four out-of-domain datasets show that APCNet is comparable to 19 state-of-the-art methods while holding a more appropriate trade-off between effectiveness and computational complexity.
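To make the attention-guided multi-layer aggregation idea concrete, the following is a minimal, speculative PyTorch sketch of one way such a module could look; it is not the authors' implementation, and the names `AttentionGuidedAggregation`, `target`, and `others` are hypothetical. The sketch assumes each pyramid level has already been projected to a common channel width, and refines one level with a spatial attention map derived from the resized features of the other levels.

```python
# Speculative sketch only: illustrates attention-guided aggregation of
# pyramid features, not APCNet's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGuidedAggregation(nn.Module):
    """Refine one pyramid level using complementary information
    pooled from the other levels (hypothetical illustration)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions turn the aggregated cross-layer context
        # into a single-channel spatial attention map in [0, 1].
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, target: torch.Tensor, others: list) -> torch.Tensor:
        # Resize every complementary feature map to the target's
        # spatial size and sum them into one context tensor.
        h, w = target.shape[-2:]
        context = sum(
            F.interpolate(o, size=(h, w), mode="bilinear", align_corners=False)
            for o in others
        )
        # The attention map gates the target features; a residual
        # connection preserves the original signal.
        return target * self.attn(context) + target


# Usage: refine the finest of three 64-channel pyramid levels.
agg = AttentionGuidedAggregation(channels=64)
f1 = torch.randn(1, 64, 88, 88)   # fine level
f2 = torch.randn(1, 64, 44, 44)   # middle level
f3 = torch.randn(1, 64, 22, 22)   # coarse level
refined = agg(f1, [f2, f3])
print(refined.shape)  # torch.Size([1, 64, 88, 88])
```

The residual gating here is one common design choice for letting complementary layers modulate, rather than overwrite, a layer's own features; the paper's module may differ in its exact fusion and attention mechanism.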
