Abstract

Accurate polyp segmentation during colonoscopy examinations can help clinicians precisely locate polyp regions for further diagnosis or surgery, thereby reducing the risk of polyps developing into cancer. Although existing approaches achieve significant improvements through multi-scale feature learning, attention/contextual augmentation, and deep supervision, polyp segmentation remains far from solved. Enhancing the capability of feature representation is a promising way to improve segmentation performance. From this perspective, we propose a simple but strong framework built on the feature pyramid network (FPN), called Feature Augmented Pyramid Networks (FAPN), for accurate polyp segmentation with augmented feature representation. Specifically, FAPN consists of three components: the Cross-Embedding Module (CEM), the Predictive Calibration Module (PCM), and the Hierarchical Feature Fusion Module (HFFM). CEM is a two-stage fusion approach: it first performs an interactive embedding of multi-level features and then fuses them a second time, enhancing the fused feature representation. After fusion, PCM leverages the predicted probability map of each stage (after supervised optimization) to calibrate the fused features, effectively highlighting regions of interest while suppressing interference from irrelevant information. Finally, HFFM sequentially combines features from each stage along the top-down pathway, yielding a more robust multi-scale feature representation that enables the framework to segment polyps more accurately. Extensive experiments demonstrate that the proposed network performs favorably against more than a dozen state-of-the-art methods on five popular polyp segmentation benchmarks.
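
To make the roles of the three components concrete, below is a minimal PyTorch sketch of how CEM, PCM, and HFFM could be wired together. All layer choices, channel widths, and fusion details here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of FAPN's three components as described in the abstract.
# Layer types, channel sizes, and fusion operators are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CEM(nn.Module):
    """Cross-Embedding Module (sketch): interactively embed two adjacent
    FPN levels, then fuse the cross-embedded features a second time."""

    def __init__(self, channels):
        super().__init__()
        self.embed_low = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_high = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, low, high):
        # Stage 1: interactive embedding -- each level is modulated by the other.
        high_up = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                                align_corners=False)
        low_e = self.embed_low(low) * torch.sigmoid(high_up)
        high_e = self.embed_high(high_up) * torch.sigmoid(low)
        # Stage 2: second fusion of the cross-embedded features.
        return self.fuse(torch.cat([low_e, high_e], dim=1))


class PCM(nn.Module):
    """Predictive Calibration Module (sketch): predict a per-stage
    probability map (deep-supervised during training) and use it to
    calibrate the fused features, highlighting likely polyp regions."""

    def __init__(self, channels):
        super().__init__()
        self.predict = nn.Conv2d(channels, 1, 1)

    def forward(self, feat):
        prob = torch.sigmoid(self.predict(feat))  # per-stage probability map
        calibrated = feat * prob + feat           # re-weight, keep identity path
        return calibrated, prob                   # prob also feeds a loss term


class HFFM(nn.Module):
    """Hierarchical Feature Fusion Module (sketch): combine stage features
    sequentially along the top-down pathway."""

    def __init__(self, channels, num_stages):
        super().__init__()
        self.fuses = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 3, padding=1)
            for _ in range(num_stages - 1))

    def forward(self, feats):
        # feats ordered from deepest (top) to shallowest (bottom).
        out = feats[0]
        for fuse, skip in zip(self.fuses, feats[1:]):
            out = F.interpolate(out, size=skip.shape[2:], mode="bilinear",
                                align_corners=False)
            out = fuse(torch.cat([out, skip], dim=1))
        return out
```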
