ABSTRACT

Over the past 20 years, medical imaging has found extensive application in clinical diagnosis, and clinicians may find it difficult to diagnose diseases from a single imaging modality alone. The main objective of multimodal medical image fusion (MMIF) is to improve both the accuracy and quality of clinical assessment by combining the structural and spectral information of the source images. This study proposes a novel MMIF method to assist clinicians and to support subsequent tasks such as image segmentation, classification, and surgical planning. First, the intensity-hue-saturation (IHS) model is used to decompose the positron emission tomography (PET)/single-photon emission computed tomography (SPECT) image, and a hue-angle mapping method then discriminates high- and low-activity regions in the PET images. A proposed structure feature adjustment (SFA) mechanism is then applied as the fusion strategy for the high- and low-activity regions, capturing structural and anatomical detail with minimal color distortion. In the second step, a new multi-discriminator generative adversarial network (MDcGAN) is proposed to produce the final fused image. Qualitative and quantitative results demonstrate that the proposed method surpasses existing MMIF methods in preserving the structural, anatomical, and functional details of PET/SPECT images. Our assessment, combining visual analysis with verification by statistical metrics, shows that color information contributes substantially to the fusion of PET and MR images. Quantitatively, the proposed algorithm outperformed the other methods in the majority of cases and achieved the second-highest scores in a few instances. The validity of the proposed method was confirmed on diverse modalities comprising a total of 1012 image pairs.
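The IHS decomposition and hue-angle discrimination step can be illustrated with a minimal sketch. This is not the paper's implementation: it uses Python's standard HSV conversion as a stand-in for the IHS model, and the hue cutoff below is a hypothetical example value, since the paper's actual hue-angle mapping is not given in the abstract.

```python
import colorsys

def classify_activity(r, g, b, hue_cutoff=0.1):
    """Label one RGB pixel (components in [0, 1]) of a pseudo-colored
    PET/SPECT image as high- or low-activity via its hue angle.

    Returns ('high' | 'low', (h, s, v)). HSV is used here as a simple
    stand-in for the IHS decomposition; hue_cutoff is illustrative only.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Warm (reddish) hues near 0 or 1 often encode high tracer uptake
    # in common PET color maps; treat them as the high-activity region.
    region = "high" if (h <= hue_cutoff or h >= 1.0 - hue_cutoff) else "low"
    return region, (h, s, v)

# Reddish pixel -> classified as high-activity; bluish -> low-activity.
print(classify_activity(0.9, 0.1, 0.1)[0])
print(classify_activity(0.1, 0.2, 0.9)[0])
```

In the actual method, this per-pixel labeling would partition the PET/SPECT image into the two regions to which the SFA fusion strategy is applied separately.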