Abstract

Brain tumor segmentation in multi-modal magnetic resonance images is an essential step in brain cancer diagnosis and treatment. Although recent multi-modal fusion networks have achieved impressive performance in brain tumor segmentation, clinical practice often presents situations in which one or more of the acquired modalities is unavailable. In this paper, we propose a network composed of a dual residual multi-variational auto-encoder and a sub-model distribution loss, which is robust to the absence of any single modality in brain tumor segmentation. The network merges information in both the encoder and the decoder through the dual residual multi-variational auto-encoder, embeds it in the latent space, and decodes the features in a residual form. In this way, the features fed to the decoder remain consistent and the difficulty of learning is reduced. We evaluate the network on BraTS2018 using subsets of the imaging modalities as input. The experimental results show that our method achieves better segmentation accuracy than the current state-of-the-art method UHVED.
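The robustness to missing modalities in multi-variational auto-encoder approaches such as UHVED typically rests on fusing the per-modality Gaussian latent distributions of only the available inputs. A minimal NumPy sketch of such a fusion step is given below, assuming product-of-experts-style precision-weighted averaging with a standard-normal prior expert; the function name and interface are illustrative, not the paper's actual implementation.

```python
import numpy as np

def fuse_gaussians(mus, logvars, available):
    """Product-of-experts fusion of per-modality Gaussian latents.

    mus, logvars: lists of arrays, one (mean, log-variance) pair per
    modality. `available` flags which modalities were acquired; only
    those contribute. With no modality available, the prior N(0, I)
    is returned, so the decoder always receives a valid latent.
    """
    # Prior expert N(0, I): precision 1, mean 0.
    precision = np.ones_like(mus[0])
    weighted_mu = np.zeros_like(mus[0])
    for mu, logvar, avail in zip(mus, logvars, available):
        if avail:
            p = np.exp(-logvar)       # precision of this modality's expert
            precision += p
            weighted_mu += p * mu
    fused_mu = weighted_mu / precision
    fused_logvar = -np.log(precision)  # variance of the product of Gaussians
    return fused_mu, fused_logvar
```

Because the fused distribution is defined for any subset of modalities, the same decoder can be trained on random modality subsets and applied unchanged when a modality is missing at test time.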

