ABSTRACT

In clinical practice, radiologists diagnose brain tumors and judge their type and grade with the help of different magnetic resonance imaging (MRI) sequences. A computer‐aided diagnosis system for brain tumors is hard to realize from a single MRI sequence alone, yet existing methods for fusing multiple MRI sequences have limited ability to enhance tumor details. To improve the fusion details of multi‐modality MRI images, this paper proposes SDR2Tr‐GAN, a novel conditional generative adversarial fusion network built on three discriminators and a Staggered Dense Residual2 (SDR2) module. In the SDR2Tr‐GAN pipeline, the generator consists of an encoder, a decoder, and a fusion strategy that enhances feature representation. The SDR2 module, developed from Res2Net, is embedded in the encoder to extract multi‐scale features. In addition, a Multi‐Head Spatial/Channel Attention Transformer is integrated into the pipeline as a fusion strategy to strengthen the long‐range dependencies of global context information. A Mask‐based constraint is designed as a novel fusion optimization mechanism that focuses on enhancing salient feature details; it uses the segmentation mask produced by a pre‐trained Unet together with the Ground Truth to optimize the training process. Meanwhile, mutual information (MI) and structural similarity (SSIM) losses jointly improve the visual perception of the fused images. Extensive experiments were conducted on the public BraTS2021 dataset. The visual and quantitative results demonstrate that the proposed method simultaneously enhances global image quality and local texture details in multi‐modality MRI images, and SDR2Tr‐GAN outperforms state‐of‐the‐art fusion methods in both subjective and objective evaluation.
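The SSIM term of the joint loss mentioned above can be illustrated with a minimal NumPy sketch. This is a global, single-window form of the standard SSIM formula combined into a simple fusion loss against two source modalities; the function names (`ssim`, `fusion_loss`), the equal weighting, and the omission of the MI term and of sliding-window averaging are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two images scaled to [0, 1].

    Practical losses compute SSIM over sliding local windows and average,
    but the per-window formula is the same as below.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

def fusion_loss(fused, src_a, src_b, alpha=0.5):
    """Hypothetical structural loss for a fused image against both
    source modalities (the MI term of the joint loss is omitted)."""
    return alpha * (1 - ssim(fused, src_a)) + (1 - alpha) * (1 - ssim(fused, src_b))
```

Because SSIM of an image with itself is exactly 1, `fusion_loss(x, x, x)` evaluates to 0, and the loss grows as the fused image diverges structurally from either source.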