Abstract

Accurate automatic image segmentation is important in medical image analysis. A perfect segmentation by a fully convolutional network (FCN) means that every pixel is classified correctly. However, accurately distinguishing edge pixels from their neighboring pixels in weak edge regions remains a great challenge. Many previous segmentation methods have focused on edge information to mitigate the weak edge problem, but the more important neighborhood information is undervalued. To tackle this problem, we propose a novel yet effective Edge and Neighborhood Guidance Network (ENGNet). Specifically, instead of using edge information only as a shape constraint, the edge and neighborhood guidance (ENG) module is designed to exploit edge information and fine-grained neighborhood spatial information simultaneously, improving the network's ability to classify edge pixels and neighboring pixels in weak edge regions. Moreover, ENG modules are adopted at different scales to learn sufficient feature representations of edges and neighborhoods. To extract complementary features more effectively along the channel dimension, we also design a channel-wise multi-scale adaptive selection (MAS) module that extracts multi-scale context information and adaptively fuses features from different scales. Two public 2D segmentation datasets, a skin lesion dataset and an endoscopic polyp dataset, are used to evaluate the proposed ENGNet. Experimental results demonstrate that by simultaneously exploiting edge information and neighborhood spatial information at different scales, ENGNet effectively alleviates misclassification in weak edge regions and outperforms other state-of-the-art methods.
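The abstract does not give implementation details for the MAS module, but the general idea of channel-wise adaptive selection over multiple scales can be sketched: pool each scale's feature map into a per-channel descriptor, turn the descriptors into selection weights with a softmax across scales, and take the weighted sum. A minimal NumPy sketch under those assumptions; the function name, shapes, and pooling choice are hypothetical, not the paper's actual design:

```python
import numpy as np

def multi_scale_adaptive_fusion(features):
    """Hypothetical sketch of MAS-style fusion: given a list of
    same-shaped feature maps (C, H, W) from different scales,
    weight each map per channel via a softmax over scales and sum."""
    stacked = np.stack(features)             # (S, C, H, W)
    # Per-scale channel descriptor via global average pooling.
    desc = stacked.mean(axis=(2, 3))         # (S, C)
    # Softmax across the scale axis yields per-channel selection weights.
    exp = np.exp(desc - desc.max(axis=0, keepdims=True))
    weights = exp / exp.sum(axis=0, keepdims=True)   # (S, C), columns sum to 1
    # Weighted sum over scales gives the adaptively fused feature map.
    return (weights[:, :, None, None] * stacked).sum(axis=0)  # (C, H, W)

feats = [np.random.rand(8, 16, 16) for _ in range(3)]
fused = multi_scale_adaptive_fusion(feats)
print(fused.shape)  # (8, 16, 16)
```

Because the softmax weights sum to one per channel, each output channel is a convex combination of that channel across scales, which is one common way to realize "adaptive selection" between scales.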
