Multimodal medical image fusion is becoming a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this work, a two-stage decomposition technique is proposed to capture significant features from both the spatial and spectral domains. The first-stage decomposition employs a multicluster fuzzy c-means (FCM) algorithm to produce gross- and fine-structural components. The fine-structural components, which contain the edge topology, are further decomposed into amplitude and phase spectra by means of the discrete cosine transformation-based discrete Fourier transform (DCT-DFT). Finally, all parts of the decomposed images are integrated with three different fusion rules. A new window-based feature quality measuring (WFQM) filter is proposed for fusing the information of the gross-structural components, while the singular value decomposition (SVD) method and a gray-wolf optimization-based pulse-coupled neural network (GWO-PCNN) model are applied to fuse the amplitude and phase spectrum distributions of the input images. Because the WFQM filter and the GWO-PCNN model are designed to fuse the significant information from the spatial fuzzy plane and the transformed frequency plane, respectively, relative contrast degradation and spectral distortion are reduced in the proposed fusion method. Experimental results are reported on the Harvard University datasets, and qualitative as well as quantitative measures are presented to compare the proposed method with other state-of-the-art techniques. The quantitative evaluation uses peak signal-to-noise ratio (PSNR), mutual information (MI), structural similarity index measure (SSIM), visual information fidelity (VIF), and entropy (ENT) to demonstrate the method's superiority.
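The sketch below is only a minimal illustration of the two-stage decomposition idea summarized above, assuming grayscale inputs normalized to [0, 1]. The FCM-based gross/fine split, the plain 2-D FFT used for the amplitude/phase decomposition (as a stand-in for the DCT-DFT step), and the simple max/average fusion rules are illustrative assumptions; they are not the paper's WFQM filter, SVD rule, or GWO-PCNN model.

```python
# Illustrative sketch of a two-stage decomposition and fusion pipeline.
# Assumes grayscale images in [0, 1]; fusion rules here are toy stand-ins.
import numpy as np
import skfuzzy as fuzz  # scikit-fuzzy


def fcm_decompose(img, n_clusters=4):
    """Split an image into gross- and fine-structural components via FCM."""
    data = img.reshape(1, -1)  # shape (n_features=1, n_pixels)
    centers, u, *_ = fuzz.cmeans(data, c=n_clusters, m=2.0,
                                 error=1e-4, maxiter=100)
    # Gross component: membership-weighted reconstruction from cluster centers.
    gross = (u * centers.reshape(-1, 1)).sum(axis=0).reshape(img.shape)
    fine = img - gross  # residual carries the edge / fine structure
    return gross, fine


def fuse(img_a, img_b):
    gross_a, fine_a = fcm_decompose(img_a)
    gross_b, fine_b = fcm_decompose(img_b)

    # Second stage: amplitude and phase spectra of the fine components
    # (plain 2-D FFT here, standing in for the DCT-DFT of the paper).
    Fa, Fb = np.fft.fft2(fine_a), np.fft.fft2(fine_b)
    amp = np.maximum(np.abs(Fa), np.abs(Fb))   # toy amplitude rule
    phase = np.angle(Fa + Fb)                  # toy phase rule
    fused_fine = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

    fused_gross = 0.5 * (gross_a + gross_b)    # toy gross-structure rule
    return np.clip(fused_gross + fused_fine, 0.0, 1.0)
```

In the actual method, the averaging of the gross components would be replaced by the proposed WFQM filter, and the max/average spectral rules by the SVD- and GWO-PCNN-based rules described in the abstract.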