Abstract

Of late, medical image fusion has emerged as a promising approach to merging different modalities of medical images. The fused image helps clinicians diagnose various critical diseases quickly and precisely. This paper proposes two fusion algorithms, Multimodal Adaptive Medical Image Fusion (MAMIF) and Multimodal without Denoised Medical Image Fusion (MDMIF); both methods use the Non-Subsampled Shearlet Transform (NSST) and a B-spline registration model. However, since MAMIF applies a denoising step, it produces visually enhanced images. The presented MAMIF algorithm fuses the images without losing any vital information for the given set of real-time and public datasets. The fusion framework uses features extracted from the NSST-decomposed images: Human Visual System (HVS)-based fusion for the Low Frequency (LF) sub-bands and Log-Gabor energy-based fusion for the High Frequency (HF) sub-bands. The proposed framework is agnostic of source image size, provided both images in a pair are of the same size. The experiments were carried out on 14 image datasets comprising grayscale and color images. The performance of the proposed MAMIF is evaluated on a dataset collected from HCG Hospital, Bangalore, and further validated by radiologists from the same hospital. Comparing the simulated results, the proposed adaptive model MAMIF produced superior visually fused images compared to other approaches such as MDMIF and MMDWT.
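The pipeline described above (decompose each source into LF and HF sub-bands, fuse each band with its own rule, then reconstruct) can be illustrated with a minimal sketch. This is not the paper's implementation: a Gaussian low-pass/residual split stands in for the NSST (which actually yields multiple directional sub-bands), a local-contrast weight stands in for the HVS-based LF rule, and a smoothed squared-coefficient map stands in for Log-Gabor energy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Stand-in for NSST decomposition: Gaussian low-pass as the LF
    sub-band, residual as a single HF sub-band."""
    lf = gaussian_filter(img, sigma)
    return lf, img - lf

def local_energy(band, sigma=1.5):
    """Local energy map; a crude stand-in for Log-Gabor energy."""
    return gaussian_filter(band ** 2, sigma)

def fuse(img_a, img_b):
    """Fuse two registered, same-size images band by band."""
    lf_a, hf_a = decompose(img_a)
    lf_b, hf_b = decompose(img_b)
    # LF rule: weighted average by local contrast (proxy for the HVS rule)
    c_a = np.abs(lf_a - gaussian_filter(lf_a, 5)) + 1e-8
    c_b = np.abs(lf_b - gaussian_filter(lf_b, 5)) + 1e-8
    lf_f = (c_a * lf_a + c_b * lf_b) / (c_a + c_b)
    # HF rule: keep the coefficient with the larger local energy
    hf_f = np.where(local_energy(hf_a) >= local_energy(hf_b), hf_a, hf_b)
    return lf_f + hf_f
```

In the real framework the HF rule is applied per directional NSST sub-band, and the reconstruction is the inverse NSST rather than a simple sum.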
