Abstract

Recent years have witnessed tremendous success in medical image analysis using deep learning methods. However, most of this work is based on fully-supervised learning, which requires large amounts of reliably labeled data. This is one of the major bottlenecks in applying these methods in practice. To reduce the labeling burden, semi-supervised learning, which enables models to learn from both labeled and unlabeled data, has been applied to many tasks, including medical image analysis. This work presents a semi-supervised attentive mutual-info generative adversarial network (SAM-GAN) for brain tumor segmentation. SAM-GAN incorporates a comprehensive attention mechanism and a novel local mutual information maximization mechanism. The attention module contains a channel attention block and a spatial attention block: the channel attention block indicates what to focus on, while the spatial attention block indicates where to focus. Moreover, the mutual information maximization mechanism directs the network toward the local dependencies of the images, giving the network stronger representational ability. Experimental results on two brain tumor segmentation tasks show that the proposed SAM-GAN outperforms traditional fully-supervised methods and other semi-supervised competitors, demonstrating the effectiveness of the proposed model.
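The two-part attention module described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weight matrices (`w1`, `w2`, `w`) and the use of a shared two-layer MLP for channel attention and a two-channel pooled map for spatial attention are assumptions based on common attention designs, and a simple per-channel mix stands in for the spatial convolution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Channel attention ('what to focus on'): pool over spatial dims,
    pass both pooled vectors through a shared 2-layer MLP, sigmoid-gate channels."""
    avg = x.mean(axis=(1, 2))                       # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))                         # (C,) max-pooled descriptor
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)   # shared MLP on each descriptor,
                   + w2 @ np.maximum(w1 @ mx, 0.0)) # summed before the sigmoid
    return x * gate[:, None, None]                  # rescale each channel

def spatial_attention(x, w):
    """Spatial attention ('where to focus'): pool over channels,
    mix the two pooled maps (stand-in for a conv), sigmoid-gate locations."""
    avg = x.mean(axis=0)                            # (H, W) channel-average map
    mx = x.max(axis=0)                              # (H, W) channel-max map
    gate = sigmoid(w[0] * avg + w[1] * mx)          # (H, W) spatial gate
    return x * gate[None, :, :]                     # rescale each location

# Toy feature map and random weights (hypothetical shapes for illustration).
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))   # MLP reduction layer
w2 = rng.standard_normal((C, C // 2))   # MLP expansion layer
w = rng.standard_normal(2)              # spatial mixing weights

y = spatial_attention(channel_attention(x, w1, w2), w)
print(y.shape)  # same shape as the input feature map
```

Applying the channel gate before the spatial gate, as here, follows the sequential ordering common in such attention modules; the output keeps the input's shape, so the module can be dropped into a segmentation network between existing layers.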
