Abstract

Clinically, accurate polyp localization in colonoscopy images is crucial for the early diagnosis and follow-up treatment of colorectal cancer. However, visual inspection is subjective, error-prone, and burdensome. In this paper, we propose an automated polyp segmentation method, named LFSRNet, to assist physicians in accurately segmenting polyps in colonoscopy images. The proposed LFSRNet follows an encoder–decoder architecture and benefits from two pivotal modules: a lesion-aware feature selection module (LFSM) and a lesion-aware feature refinement module (LFRM). Specifically, the LFSM selects lesion-aware features from the three highest layers of the encoder via a non-local attention mechanism and fuses them to generate the initial segmentation map for the decoder. The LFRM embedded in each decoder layer incorporates the guided context information and the output of the LFRM in the adjacent higher layer to refine the lesion-aware features. Through top-down deep supervision, LFSRNet adaptively selects and refines lesion-aware features and precisely localizes polyp regions. Experimental results on the Kvasir-SEG dataset (with an 80%/20% train–test split) show that LFSRNet outperforms six state-of-the-art competing methods, achieving a Dice score of 0.9127, an intersection-over-union (IoU) score of 0.8615, a sensitivity of 0.9174, an accuracy of 0.9728, an F2 score of 0.9123, and an MAE of 0.0291. Further experiments show that LFSRNet also generalizes better than the competing methods when trained on Kvasir-SEG and tested on both the CVC-ClinicDB and EndoScene datasets.
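
To make the two modules concrete, the following is a minimal PyTorch sketch of the ideas the abstract describes: non-local-attention feature selection over the three highest encoder layers (the LFSM) and guided, top-down refinement (the LFRM). All class names, channel sizes, and the fusion and refinement layouts here are illustrative assumptions, not the paper's exact designs.

```python
# A minimal sketch, assuming a PyTorch setting. NonLocalBlock, LFSMSketch,
# LFRMSketch, and their channel sizes are hypothetical stand-ins for the
# paper's modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalBlock(nn.Module):
    """Standard non-local (self-attention) block over spatial positions."""

    def __init__(self, channels: int):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection


class LFSMSketch(nn.Module):
    """Select lesion-aware features from the three highest encoder layers
    via non-local attention, fuse them, and predict an initial map."""

    def __init__(self, chans=(256, 512, 1024), fused=256):
        super().__init__()
        self.attn = nn.ModuleList(NonLocalBlock(c) for c in chans)
        self.proj = nn.ModuleList(nn.Conv2d(c, fused, 1) for c in chans)
        self.head = nn.Conv2d(fused, 1, kernel_size=1)

    def forward(self, feats):
        # feats: the three highest encoder maps, finest resolution first.
        target = feats[0].shape[-2:]
        fused = sum(
            F.interpolate(p(a(f)), size=target, mode="bilinear",
                          align_corners=False)
            for f, a, p in zip(feats, self.attn, self.proj)
        )
        return self.head(fused)  # initial segmentation logits


class LFRMSketch(nn.Module):
    """Refine decoder features using the prediction from the adjacent
    higher layer as guided context (again, an assumed layout)."""

    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat, higher_map):
        # Upsample the coarser map so it can guide the current level.
        guide = F.interpolate(higher_map, size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        refined = self.refine(torch.cat([feat, guide], dim=1))
        # Residual refinement: predict a correction to the guide map,
        # so each decoder level can be deeply supervised.
        return guide + self.head(refined)
```

In this reading, the LFSM output seeds the coarsest LFRM, and each subsequent LFRM refines the map passed down from the level above; supervising every intermediate map matches the top-down deep supervision the abstract mentions.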
