Abstract
Accurate brain tumour segmentation is critical for tasks such as surgical planning, diagnosis, and analysis, with magnetic resonance imaging (MRI) being the preferred modality due to its excellent visualisation of brain tissues. However, the wide intensity range of voxel values in MR scans often results in significant overlap between the density distributions of different tumour tissues, reducing contrast and segmentation accuracy. This paper introduces a novel framework based on conditional generative adversarial networks (cGANs) that enhances the contrast of tumour subregions for both voxel-wise and region-wise segmentation approaches. We present two models: Enhancement and Segmentation GAN (ESGAN), which combines a classifier loss with an adversarial loss to predict the central labels of input patches, and Enhancement GAN (EnhGAN), which generates high-contrast synthetic images with reduced inter-class overlap. These synthetic images are then fused with the corresponding modalities to emphasise meaningful tissues while suppressing weaker ones. We also introduce a novel generator that adaptively calibrates voxel values within input patches, leveraging fully convolutional networks. Both models employ a multi-scale Markovian network as the GAN discriminator to capture local patch statistics and estimate the distribution of MR images in complex contexts. Experimental results on publicly available MR brain tumour datasets demonstrate the competitive accuracy of our models compared to current brain tumour segmentation techniques.
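To make the combined objective described above concrete, the sketch below shows a PatchGAN-style (Markovian) discriminator that scores local patches of an MR slice, together with an ESGAN-like generator objective that sums an adversarial term with a classifier loss on the central label of each patch. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the layer sizes, the four-class label set, and the weighting factor `lambda_cls` are assumptions made for this example.

```python
# Illustrative sketch (assumed details, not the paper's code): a Markovian
# patch discriminator plus a combined adversarial + classifier objective.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores,
    one per receptive-field patch, instead of a single scalar per image."""
    def __init__(self, in_channels=1, base=64):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=4, stride=stride, padding=1),
                nn.InstanceNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            block(in_channels, base, 2),
            block(base, base * 2, 2),
            block(base * 2, base * 4, 1),
            nn.Conv2d(base * 4, 1, kernel_size=4, stride=1, padding=1),  # patch score map
        )

    def forward(self, x):
        return self.net(x)

adv_criterion = nn.BCEWithLogitsLoss()   # adversarial term (real vs. fake patches)
cls_criterion = nn.CrossEntropyLoss()    # classifier term on the centre label
lambda_cls = 1.0                         # assumed weighting between the two terms

def esgan_style_generator_loss(disc, fake_patches, class_logits, centre_labels):
    """Combined objective: fool the patch discriminator while also predicting
    the central label of each input patch (the segmentation term)."""
    patch_scores = disc(fake_patches)
    adv_loss = adv_criterion(patch_scores, torch.ones_like(patch_scores))
    cls_loss = cls_criterion(class_logits, centre_labels)
    return adv_loss + lambda_cls * cls_loss

# Minimal smoke test with random tensors standing in for MR patches.
if __name__ == "__main__":
    disc = PatchDiscriminator(in_channels=1)
    fake = torch.randn(8, 1, 64, 64)     # synthetic high-contrast patches
    logits = torch.randn(8, 4)           # 4 tissue classes (assumed)
    labels = torch.randint(0, 4, (8,))   # centre-voxel labels
    print(esgan_style_generator_loss(disc, fake, logits, labels))
```

In this sketch the multi-scale aspect of the discriminator could be obtained by applying `PatchDiscriminator` instances to the input at several resolutions and averaging their losses; only a single scale is shown here for brevity.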