Abstract

Multimodal medical image fusion aims to suppress insignificant information and improve the accuracy of clinical diagnosis. The purpose of image fusion is to retain the salient features and detail information of multiple source images to yield a more informative fused image. This paper presents a hybrid multimodal medical image fusion algorithm that operates at both the pixel and feature levels. For the pixel-level fusion, the source images are decomposed into low- and high-frequency components using the Discrete Wavelet Transform (DWT), and the low-frequency coefficients are fused using the maximum fusion rule. Thereafter, the curvelet transform is applied to the high-frequency coefficients. The obtained high-frequency subbands (fine scale) are fused using a Principal Component Analysis (PCA) fusion rule. The feature-level fusion, in turn, is accomplished by extracting various features from the coarse and detail subbands and using them to guide the fusion process. These features include the mean, variance, entropy, visibility, and standard deviation. Thereafter, the inverse curvelet transform is applied to the fused high-frequency coefficients, and the resultant fused image is finally obtained by applying the inverse DWT to the fused low- and high-frequency components. The proposed method is implemented and evaluated on different pairs of medical image modalities. The results demonstrate that the proposed method improves the quality of the final fused image in terms of Mutual Information (MI), Correlation Coefficient (CC), entropy, Structural Similarity Index (SSIM), Edge Strength Similarity for Image Quality (ESSIM), Peak Signal-to-Noise Ratio (PSNR), and the edge-based similarity measure (QAB/F).
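
The sketch below is only an illustration of the pixel-level path outlined above, not the authors' implementation. It assumes two pre-registered grayscale source images of equal size and uses PyWavelets and NumPy; the curvelet stage is omitted here (no standard curvelet package is assumed), so the PCA rule is applied directly to the DWT detail subbands.

```python
import numpy as np
import pywt


def pca_weights(a, b):
    """Fusion weights from the principal eigenvector of the 2x2 covariance matrix."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant principal component
    return v / v.sum()


def fuse_pixel_level(img_a, img_b, wavelet="db1"):
    # Decompose both sources into low- (cA) and high-frequency (cH, cV, cD) subbands.
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Low-frequency coefficients: maximum fusion rule.
    cA_f = np.maximum(cA_a, cA_b)

    # High-frequency coefficients: PCA-weighted fusion
    # (the curvelet transform of these subbands is skipped in this sketch).
    fused_details = []
    for d_a, d_b in zip((cH_a, cV_a, cD_a), (cH_b, cV_b, cD_b)):
        w = pca_weights(d_a, d_b)
        fused_details.append(w[0] * d_a + w[1] * d_b)

    # Inverse DWT yields the fused image.
    return pywt.idwt2((cA_f, tuple(fused_details)), wavelet)
```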
