Abstract

The automated segmentation of polyps plays a crucial role in the early diagnosis and treatment of gastrointestinal diseases. However, due to the diversity of polyp lesions and complex imaging environments, accurately identifying the true lesion area is challenging, especially for small polyps. The blurred boundaries of polyps can also cause over- or under-segmentation. This research proposes a boundary-guided network with two-stage transfer learning: (1) the network is trained to determine the region of interest for polyp lesions, and the resulting weights are saved; (2) transfer learning is applied to leverage this learned prior knowledge for fine segmentation of the region of interest. This approach accurately identifies the lesion area and thereby achieves good segmentation, especially for small polyps. In addition, the pyramid vision transformer is used as the feature backbone. A boundary feature extraction module (BFE), a deep feature extraction module (DFE), and a multi-scale fusion module (MF) are designed to generate boundary maps that guide the decoder in producing prediction maps. Experimental results show that the proposed method outperforms the comparison methods on four public datasets and a private dataset (including gastric polyps), with mDSC scores exceeding 85%. Notably, on the ETIS-Larib dataset, the mDSC score is improved by 11.7% over the comparison methods.
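The two-stage transfer-learning idea described above can be sketched in miniature: train on a coarse localization task first, then warm-start the fine segmentation task from those weights instead of from scratch. The sketch below is purely illustrative, using a toy logistic-regression "model" as a stand-in; the actual network, its PVT backbone, and the BFE/DFE/MF modules are not reproduced here, and all labels and tasks in the snippet are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(w, X, y, lr=0.1, epochs=200):
    """Minimal logistic-regression loop, a stand-in for the real training stage."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on log-loss
    return w

# Toy data: 2-D feature vectors standing in for image features.
X = rng.normal(size=(200, 2))

# Stage 1: a coarse "region of interest" task (hypothetical labels).
y_roi = (X[:, 0] + X[:, 1] > 0).astype(float)
w_roi = train_linear(np.zeros(2), X, y_roi)     # learn and "save" the prior weights

# Stage 2: transfer learning -- warm-start from the stage-1 weights and
# fine-tune on the finer segmentation-like task (hypothetical labels).
y_fine = (0.9 * X[:, 0] + 1.1 * X[:, 1] > 0).astype(float)
w_fine = train_linear(w_roi.copy(), X, y_fine)  # initialized from prior, not zeros
```

The key design point mirrored here is that stage 2 starts from stage-1 weights rather than a random initialization, so the fine-segmentation stage inherits the localization prior.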
