Brain tumor segmentation is an essential task for medical diagnosis and treatment planning. Multi-modal MRI provides complementary information that is crucial for accurate segmentation of brain tumors, but missing modalities are a common problem in clinical practice. Existing segmentation methods often fail to generate accurate object boundaries and to selectively fuse features from the tumor region, resulting in unreliable segmentation masks. In this work, we propose an Edge-aware Discriminative Feature Fusion Based Transformer U-Net (EA-DFFTU-Net) to segment brain tumors effectively even in the absence of modalities. First, the MRI input data is pre-processed, and features are then extracted using a ResNet-50 encoder, which learns local spatial features. We employ an Edge Feature Module (EFM) to acquire edge attention representations. These features are then passed to the Discriminative Feature Fusion based Transformer U-Net (DFFTU-Net), which learns global context and captures multiscale features. A Discriminative Feature Fusion Module (DFFM) in the decoder of the DFFTU-Net effectively fuses the multiscale features to obtain accurate segmentation. We evaluate the proposed EA-DFFTU-Net and compare its results with those of existing brain tumor segmentation techniques.
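To make the described pipeline concrete, the sketch below traces the data flow the abstract names: a ResNet-50 encoder for local spatial features, an EFM for edge attention, a transformer body for global context, and a DFFM gate in the decoder. This is a minimal illustrative sketch, not the authors' implementation: the internals of `EdgeFeatureModule` and `DiscriminativeFeatureFusion`, the single-stage encoder, the channel sizes, and the class count are all assumptions, since the abstract does not specify them.

```python
# Hedged sketch of the EA-DFFTU-Net data flow described in the abstract.
# Module internals are illustrative placeholders, not the paper's design.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class EdgeFeatureModule(nn.Module):
    """Hypothetical EFM: derives an edge-attention map from encoder features."""
    def __init__(self, channels):
        super().__init__()
        self.edge_conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feats):
        attn = torch.sigmoid(self.edge_conv(feats))  # edge attention in [0, 1]
        return feats * attn                          # emphasize boundary regions


class DiscriminativeFeatureFusion(nn.Module):
    """Hypothetical DFFM: gated fusion of decoder and skip features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, decoder_feats, skip_feats):
        g = self.gate(torch.cat([decoder_feats, skip_feats], dim=1))
        return g * decoder_feats + (1 - g) * skip_feats  # selective fusion


class EADFFTUNetSketch(nn.Module):
    """Encoder (ResNet-50) -> EFM -> transformer body -> DFFM-fused decoding."""
    def __init__(self, in_modalities=4, num_classes=3):
        super().__init__()
        backbone = resnet50(weights=None)
        # Accept stacked MRI modalities (missing ones can be zero-filled).
        backbone.conv1 = nn.Conv2d(in_modalities, 64, 7, stride=2, padding=3,
                                   bias=False)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)
        self.layer1 = nn.Sequential(backbone.maxpool, backbone.layer1)  # 256 ch
        self.efm = EdgeFeatureModule(256)
        # Stand-in for the transformer U-Net body: one self-attention layer
        # over flattened feature tokens, capturing global context.
        self.attn = nn.TransformerEncoderLayer(d_model=256, nhead=8,
                                               batch_first=True)
        self.dffm = DiscriminativeFeatureFusion(256)
        self.head = nn.Conv2d(256, num_classes, kernel_size=1)

    def forward(self, x):
        skip = self.layer1(self.stem(x))          # local spatial features
        edge = self.efm(skip)                     # edge-aware representation
        b, c, h, w = edge.shape
        tokens = edge.flatten(2).transpose(1, 2)  # (B, HW, C) attention tokens
        glob = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        fused = self.dffm(glob, skip)             # discriminative fusion
        return self.head(fused)                   # per-pixel class logits


if __name__ == "__main__":
    model = EADFFTUNetSketch()
    out = model(torch.randn(1, 4, 128, 128))      # 4 stacked MRI modalities
    print(out.shape)                              # torch.Size([1, 3, 32, 32])
```

Zero-filling absent modality channels is one simple way such a model can tolerate missing inputs at inference time; whether the paper uses this or a learned substitution strategy is not stated in the abstract.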