Abstract

Multimodal medical image fusion is an important task for retrieving complementary information from medical images. The shift sensitivity, lack of phase information, and poor directionality of real-valued wavelet transforms motivated us to use a complex wavelet transform for fusion. We use the Daubechies complex wavelet transform (DCxWT), which is approximately shift invariant and provides phase information. In the present work, we propose a new multimodal medical image fusion method based on the multiresolution principle, applying DCxWT at multiple levels. The proposed method fuses the complex wavelet coefficients of the source images using the maximum selection rule. Experiments were performed on three different sets of multimodal medical images. The proposed method is compared visually and quantitatively with wavelet-domain fusion methods (dual-tree complex wavelet transform (DTCWT), lifting wavelet transform (LWT), multiwavelet transform (MWT), and stationary wavelet transform (SWT)) and spatial-domain methods (principal component analysis (PCA), linear, and sharp), and further with contourlet transform (CT) and nonsubsampled contourlet transform (NSCT) based fusion methods. Five fusion metrics are used for comparison: entropy, edge strength, standard deviation, fusion factor, and fusion symmetry. The comparison results show that the proposed fusion method performs better than all of the above existing methods. Robustness of the proposed method is tested against Gaussian, salt-and-pepper, and speckle noise, and plots of the fusion metrics for the different noise cases establish the superiority of the proposed fusion method.
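The core of the fusion step described above is the maximum selection rule applied to wavelet coefficients. A minimal sketch of that rule in Python follows; it assumes the (possibly complex) DCxWT coefficient arrays of the two source images have already been computed and aligned, since DCxWT itself is not available in standard libraries. The function name `fuse_max_magnitude` is illustrative, not from the paper.

```python
import numpy as np

def fuse_max_magnitude(c1: np.ndarray, c2: np.ndarray) -> np.ndarray:
    """Maximum selection rule: at each position, keep the coefficient
    with the larger magnitude. Works for real or complex coefficients,
    since np.abs gives the modulus of complex values."""
    mask = np.abs(c1) >= np.abs(c2)
    return np.where(mask, c1, c2)

# Toy example on complex coefficients (illustrative values only):
a = np.array([1 + 1j, 0.5 + 0j])
b = np.array([0.2 + 0j, 2.0 + 0j])
fused = fuse_max_magnitude(a, b)  # picks 1+1j (|.|≈1.41 > 0.2), then 2.0
```

In a full pipeline this rule would be applied subband-by-subband at each decomposition level, followed by the inverse transform to obtain the fused image.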
