Abstract

Brain tumors pose a serious threat to human life and health worldwide. Automatic brain tumor segmentation from multiple MR modalities is a challenging problem in medical image analysis, and accurate segmentation depends on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation, but they fail to capture the relationships between MR modalities and the feature correlations between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network based on disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn an informative multi-modal feature representation. A novel disentangled representation learning scheme then decouples the fused representation into multiple factors, each corresponding to a target tumor region. Region-aware contrastive learning further encourages the network to extract tumor-region-related feature representations, and the segmentation results are finally obtained by region-specific segmentation decoders. Quantitative and qualitative experiments on the public BraTS 2018 and BraTS 2019 datasets confirm the value of the proposed strategies, and the proposed approach outperforms other state-of-the-art methods. Moreover, the proposed strategies can be extended to other deep neural networks.
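To make the pipeline described above concrete, the following is a minimal PyTorch-style sketch of the four stages (per-modality encoding, feature fusion, disentanglement into per-region factors, and region-aware contrastive learning). All module names, layer sizes, and the InfoNCE-style contrastive loss are illustrative assumptions for exposition; they are not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionDisentangleNet(nn.Module):
    """Hypothetical sketch: fuse multi-modal MR features, disentangle
    them into per-region factors, and decode each factor into a binary
    tumor-region mask. Layer sizes are illustrative, not the paper's."""

    def __init__(self, n_modalities=4, channels=32, n_regions=3):
        super().__init__()
        # One lightweight encoder per MR modality (e.g., T1, T1ce, T2, FLAIR).
        self.encoders = nn.ModuleList(
            nn.Conv3d(1, channels, 3, padding=1) for _ in range(n_modalities)
        )
        # Feature fusion: concatenate modality features, mix with a 1x1x1 conv.
        self.fuse = nn.Conv3d(n_modalities * channels, channels, 1)
        # Disentanglement: one factor head per target region (e.g., WT, TC, ET).
        self.factor_heads = nn.ModuleList(
            nn.Conv3d(channels, channels, 1) for _ in range(n_regions)
        )
        # One segmentation decoder per region factor.
        self.decoders = nn.ModuleList(
            nn.Conv3d(channels, 1, 1) for _ in range(n_regions)
        )

    def forward(self, x):
        # x: (B, n_modalities, D, H, W) stacked MR volumes.
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        fused = F.relu(self.fuse(torch.cat(feats, dim=1)))
        factors = [F.relu(head(fused)) for head in self.factor_heads]
        masks = [torch.sigmoid(dec(f)) for dec, f in zip(self.decoders, factors)]
        return factors, masks


def region_contrastive_loss(factors, tau=0.1):
    """InfoNCE-style loss over pooled region factors: factors of the same
    region are pulled together across the batch, different regions pushed
    apart. A plausible reading of 'region-aware contrastive learning',
    not the authors' exact formulation. Requires batch size >= 2."""
    # Pool each factor volume to a vector and L2-normalise: (n_regions, B, C).
    z = torch.stack([F.normalize(f.mean(dim=(2, 3, 4)), dim=1) for f in factors])
    n_regions, B, _ = z.shape
    flat = z.reshape(n_regions * B, -1)
    sim = flat @ flat.t() / tau                             # pairwise similarity
    labels = torch.arange(n_regions).repeat_interleave(B)   # region id per row
    mask = torch.eye(len(flat), dtype=torch.bool)           # drop self-pairs
    sim = sim.masked_fill(mask, float('-inf'))
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum() / pos.sum()


# Toy usage: the contrastive term would be added to the segmentation loss.
net = FusionDisentangleNet()
x = torch.randn(2, 4, 16, 16, 16)
factors, masks = net(x)
loss = region_contrastive_loss(factors)
```

In this reading, disentanglement is realised simply by giving each target region its own factor head and decoder, while the contrastive term supplies the region-awareness by shaping the factor space.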
