Abstract

The performance of deep learning-based medical image segmentation methods largely depends on the segmentation accuracy of tissue boundaries. However, since the boundary region lies at the junction of areas of different categories, pixels located at the boundary inevitably carry features belonging to other classes and are therefore difficult to distinguish. This paper proposes FBCU-Net, a fine-grained contextual modeling network for medical image segmentation based on boundary semantic features, which uses the semantic features of boundary regions to reduce the influence of irrelevant features on boundary pixels. First, based on the observation that indistinguishable pixels in medical images are usually boundary pixels, we introduce new supervision information to locate and classify boundary pixels. Second, building on existing relational context modeling schemes, we generate boundary region representations that encode the semantic features of boundary regions. Finally, we use these boundary region representations to suppress the influence of irrelevant features on boundary pixels and to generate highly discriminative pixel representations. Furthermore, to strengthen the network's attention to the boundary region, we also propose a boundary enhancement strategy. We evaluate the proposed model on five datasets: TUI (Thyroid Tumor), ISIC-2018 (Dermoscopy), 2018 Data Science Bowl (Cell Nuclei), Glas (Colon Cancer), and BUSI (Breast Cancer). The results show that FBCU-Net achieves better boundary segmentation performance and overall performance on diverse medical images than other state-of-the-art (SOTA) methods, and has great potential for clinical application.
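The abstract mentions deriving new supervision information that locates boundary pixels from the segmentation labels. The paper's exact construction is not given here, but a common way to obtain such a boundary mask is to mark every pixel whose neighborhood contains more than one class. A minimal NumPy sketch of this idea (the function name `boundary_mask` and the 4-neighborhood choice are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def boundary_mask(labels: np.ndarray) -> np.ndarray:
    """Mark pixels of a 2-D label map whose 4-neighborhood
    contains a different class (i.e., boundary pixels).

    Note: this is a generic boundary-extraction sketch, not
    the supervision scheme used by FBCU-Net itself.
    """
    h, w = labels.shape
    # replicate-pad so border pixels compare against themselves
    padded = np.pad(labels, 1, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    # offsets into the padded array for up/down/left/right neighbors
    for dy, dx in ((0, 1), (2, 1), (1, 0), (1, 2)):
        neighbor = padded[dy:dy + h, dx:dx + w]
        mask |= neighbor != labels
    return mask
```

Such a mask can then serve as an auxiliary supervision target, so the network is explicitly penalized for misclassifying the hard-to-distinguish pixels at class junctions.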
