Abstract

Semantic segmentation methods based on deep learning have achieved state-of-the-art performance in recent years, and many Convolutional Neural Network (CNN) models have been proposed. Among them, U-Net, with its simple encoder-decoder structure, can learn multi-scale features carrying rich context information and has become one of the most popular neural network architectures for medical image segmentation. To reuse the features that preserve detailed image structure in the encoder path, U-Net employs skip connections that simply copy low-level encoder features to the decoder; it therefore cannot explore the correlations between the two paths or across different scales. This study proposes a multi-scale context interaction learning network (MCIU-net) for medical image segmentation. First, to effectively fuse the features with detailed structure in the encoder path and the more semantic features in the decoder path, we perform interaction learning at each corresponding scale via a bidirectional ConvLSTM (BConvLSTM) unit. Second, interaction learning among all blocks of the decoder path is also employed to dynamically merge multi-scale contexts. We validate the proposed interaction learning network on three medical image datasets, covering retinal blood vessel segmentation, skin lesion segmentation, and lung segmentation, and demonstrate promising results compared with state-of-the-art methods.
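The core fusion idea described above (treating the encoder skip feature and the decoder feature as a two-step sequence processed by a bidirectional ConvLSTM) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses 1x1 (pointwise) convolutions in place of the spatial kernels a real BConvLSTM would use, and all names (`ConvLSTMCell`, `bconvlstm_fuse`) and weight initializations are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal ConvLSTM cell with 1x1 (pointwise) convolutions for
    illustration; a real BConvLSTM would use e.g. 3x3 spatial kernels."""
    def __init__(self, in_ch, hid_ch, rng):
        # One weight matrix maps [x; h] channels to the 4 gates
        # (input, forget, output, candidate).
        self.W = rng.standard_normal((4 * hid_ch, in_ch + hid_ch)) * 0.1
        self.b = np.zeros(4 * hid_ch)

    def step(self, x, h, c):
        # x, h, c: (C, H, W). Concatenate on channels, apply pointwise conv.
        z = np.einsum('oc,chw->ohw', self.W, np.concatenate([x, h], axis=0))
        z += self.b[:, None, None]
        i, f, o, g = np.split(z, 4, axis=0)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
        h = sigmoid(o) * np.tanh(c)                    # hidden state
        return h, c

def bconvlstm_fuse(enc_feat, dec_feat, hid_ch=8, seed=0):
    """Fuse an encoder skip feature and a decoder feature of the same
    scale by running a forward and a backward ConvLSTM over the
    two-element sequence [enc_feat, dec_feat]."""
    rng = np.random.default_rng(seed)
    C, H, W = enc_feat.shape
    fwd = ConvLSTMCell(C, hid_ch, rng)
    bwd = ConvLSTMCell(C, hid_ch, rng)
    seq = [enc_feat, dec_feat]

    h = c = np.zeros((hid_ch, H, W))
    for x in seq:                       # forward direction
        h, c = fwd.step(x, h, c)
    h_fwd = h

    h = c = np.zeros((hid_ch, H, W))
    for x in reversed(seq):             # backward direction
        h, c = bwd.step(x, h, c)
    h_bwd = h

    # Concatenate both directions: (2 * hid_ch, H, W).
    return np.concatenate([h_fwd, h_bwd], axis=0)

rng = np.random.default_rng(1)
enc = rng.standard_normal((4, 16, 16))  # encoder skip feature
dec = rng.standard_normal((4, 16, 16))  # upsampled decoder feature
fused = bconvlstm_fuse(enc, dec)
print(fused.shape)                      # (16, 16, 16)
```

Because the gates are learned functions of both inputs, this fusion can weight encoder detail against decoder semantics per location, rather than simply copying encoder features as a plain skip connection does.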
