Abstract

Owing to low contrast, high inter-tissue similarity, and the varying scales of diverse tissues in 2D medical images, accurately segmenting regions of interest is challenging. To address these issues, we propose a context hierarchical integrated network, named CHI-Net, for medical image segmentation, which accurately segments salient regions from medical images in a purely task-driven manner. The proposed CHI-Net consists of two key modules, i.e., a dense dilated convolution (DDC) module and a stacked residual pooling (SRP) module. Specifically, the DDC module hierarchically captures substantial complementary features by combining four cascaded branches of hybrid dilated convolutions, which is conducive to extracting features at diverse scales. The SRP module integrates encoder detail features through multiple effective fields of view, aiming to generate more discriminative features. Extensive experimental results on five benchmark datasets with different objects show that the proposed CHI-Net outperforms state-of-the-art object segmentation methods.
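The multi-scale benefit of the DDC module's cascaded hybrid dilated convolutions comes from how dilation enlarges the receptive field without adding parameters. The abstract does not specify the kernel size or dilation rates, so the values below (3x3 kernels, rates 1, 2, 3, 5) are illustrative assumptions only; the sketch computes how the receptive field grows across a cascaded branch.

```python
# Minimal sketch: receptive-field growth of cascaded dilated
# convolutions, as exploited by hybrid dilated convolution branches.
# Kernel size and dilation rates are hypothetical, not from the paper.

def receptive_field(kernel_size, dilation_rates):
    """Receptive field of a stack of dilated convolutions.

    Each layer with kernel size k and dilation d widens the
    receptive field by (k - 1) * d.
    """
    rf = 1
    for d in dilation_rates:
        rf += (kernel_size - 1) * d
    return rf

# One plain 3x3 convolution covers 3 pixels per axis:
print(receptive_field(3, [1]))           # -> 3

# Four cascaded 3x3 convolutions with hypothetical rates 1, 2, 3, 5
# cover a much wider context at the same parameter cost:
print(receptive_field(3, [1, 2, 3, 5]))  # -> 23
```

Choosing co-prime-like rates (rather than, say, 2, 4, 8) is the usual motivation for "hybrid" dilation: it avoids the gridding artifact where some input pixels are never touched by the stacked kernels.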
