Abstract

Accurate brain tumor segmentation is an essential step in clinical diagnosis and surgical treatment. Multimodal brain tumor segmentation relies strongly on an effective fusion method and a strong segmentation network. However, one or more MR modalities are often missing in clinical scenarios due to image corruption, acquisition protocols, scanner availability, and scanning cost, which can severely degrade tumor segmentation accuracy and cause information loss for downstream disease analysis. To address this issue, I propose a novel deep neural network guided by multimodal feature fusion and latent feature learning. On the one hand, the proposed network can segment brain tumors when one or more modalities are missing; on the other hand, it can retrieve the missing modalities to compensate for the incomplete data. The network consists of three key components. First, a Multimodal Feature Fusion Module (MFFM), composed of a Cross-Modality Fusion Module (CMFM) and a Multi-Scale Fusion Module (MSFM), effectively fuses the complementary information from the different modalities. Second, a Spatial Consistency-based Latent Feature Learning Module (SC-LFLM) exploits the latent correlation across modalities and extracts the features relevant to segmentation. Third, Multi-Task Learning (MTL) paths are integrated to supervise the segmentation and recover the missing modalities. The proposed method is evaluated on the BraTS 2018 dataset and achieves superior segmentation results when one or more modalities are missing, compared with state-of-the-art methods. Furthermore, the proposed modules can be easily adapted to other multimodal network architectures and research fields.
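The abstract does not give implementation details for the fusion modules. As a rough illustration only, the sketch below shows one plausible form of a cross-modality fusion block: each modality's feature map is re-weighted by a channel-attention gate, then the weighted maps are concatenated and projected back. The class name, the squeeze-and-excitation-style gate, and all layer sizes are assumptions, not taken from the paper.

```python
# A minimal sketch, assuming per-modality 3D feature maps of equal shape.
# This is NOT the authors' CMFM; it only illustrates the general idea of
# attention-weighted multimodal feature fusion.
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    def __init__(self, channels: int, num_modalities: int = 4):
        super().__init__()
        # One channel-attention gate per modality (assumed design).
        self.gates = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool3d(1),
                nn.Conv3d(channels, channels // 4, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels // 4, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            for _ in range(num_modalities)
        )
        # Project the concatenated, re-weighted features back to `channels`.
        self.project = nn.Conv3d(channels * num_modalities, channels, kernel_size=1)

    def forward(self, feats: list) -> torch.Tensor:
        # feats: one (B, C, D, H, W) feature map per modality.
        weighted = [g(f) * f for g, f in zip(self.gates, feats)]
        return self.project(torch.cat(weighted, dim=1))

# Usage: fuse features from the four BraTS modalities (T1, T1ce, T2, FLAIR).
if __name__ == "__main__":
    fusion = CrossModalityFusion(channels=32, num_modalities=4)
    feats = [torch.randn(1, 32, 16, 16, 16) for _ in range(4)]
    print(fusion(feats).shape)  # torch.Size([1, 32, 16, 16, 16])
```

Similarly, the multi-task learning paths imply a joint objective that supervises segmentation while reconstructing the missing modalities. The sketch below combines a Dice segmentation loss with an L1 reconstruction loss; the specific loss choices and the weighting factor `lambda_rec` are assumptions.

```python
# A minimal sketch of a joint segmentation + modality-recovery objective,
# assuming softmax segmentation probabilities and one-hot targets.
import torch
import torch.nn.functional as F

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # pred, target: (B, C, D, H, W); Dice computed per class, then averaged.
    dims = tuple(range(2, pred.dim()))
    inter = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def multitask_loss(seg_pred, seg_target, rec_pred, rec_target, lambda_rec=0.5):
    # Segmentation supervision plus recovery of the missing modalities.
    return dice_loss(seg_pred, seg_target) + lambda_rec * F.l1_loss(rec_pred, rec_target)
```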
