Abstract

Colorectal polyps are known to be potential precursors to colorectal cancer. Effective polyp segmentation during colonoscopy examinations can help clinicians accurately locate potential polyp areas and reduce both misdiagnosis and missed diagnosis. Although existing approaches have achieved significant breakthroughs in medical image segmentation, polyp segmentation is still far from solved, mainly for the following reasons: (1) most methods tend to ignore feature misalignment during the feature aggregation process; and (2) few algorithms explicitly consider the impact of boundary information on polyp segmentation performance. To address these issues, we formulate a novel neural network for polyp segmentation in endoscopy images. Unlike existing approaches, we explore a new paradigm for enhancing multi-level feature fusion by introducing the Feature Fusion Module, which leverages a learned semantic offset field to align multi-level feature maps and thereby resolve the feature misalignment issue. In addition, we design an auxiliary boundary branch that focuses on boundary-aware information to boost the performance of boundary prediction. Specifically, the reference boundary map learned through end-to-end optimization can be regarded as a complementary feature to the high-level semantic representation; it is then integrated into the main branch via the Boundary Embedding Module, which in turn promotes further refinement of the prediction, especially at the boundary. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in polyp segmentation.
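The core of the offset-based alignment mentioned above is resampling one feature map at positions shifted by a per-pixel offset field before fusing it with another level. The sketch below illustrates that operation in plain NumPy; the function name and implementation are illustrative assumptions for exposition, not the paper's actual code, which would typically use a differentiable grid-sampling operator in a deep learning framework.

```python
import numpy as np

def warp_with_offsets(feat, offsets):
    """Bilinearly resample a (C, H, W) feature map at positions shifted by a
    per-pixel offset field (2, H, W) -- the basic operation behind aligning
    multi-level features with a learned semantic offset field.
    Note: this is a hypothetical sketch, not the paper's implementation."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # offsets[0] shifts the x coordinate, offsets[1] the y coordinate (pixels)
    sx = np.clip(xs + offsets[0], 0, W - 1)
    sy = np.clip(ys + offsets[1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = sx - x0, sy - y0
    return (feat[:, y0, x0] * (1 - wx) * (1 - wy)
          + feat[:, y0, x1] * wx * (1 - wy)
          + feat[:, y1, x0] * (1 - wx) * wy
          + feat[:, y1, x1] * wx * wy)
```

In the network itself the offset field would be predicted by a small convolutional head from the concatenated feature levels and learned end-to-end, so that semantically corresponding positions line up before fusion.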
