Abstract

Technology-assisted clinical diagnosis has gained tremendous importance in modern-day healthcare systems. To this end, multimodal medical image fusion has attracted great attention from the research community. Several fusion algorithms merge Computed Tomography (CT) and Magnetic Resonance (MR) images to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and the fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms.
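The preprocessing steps named above (contrast stretching followed by spatial-gradient edge identification) can be illustrated with a minimal sketch. This is not the paper's CSID implementation; the percentile bounds and central-difference gradient are assumptions chosen for illustration:

```python
import numpy as np

def contrast_stretch(img, lo=2, hi=98):
    """Linearly rescale intensities between the lo/hi percentiles to [0, 1].

    Percentile clipping (rather than min/max) is an assumed choice that
    keeps a few outlier pixels from flattening the rest of the range.
    """
    p_lo, p_hi = np.percentile(img, (lo, hi))
    stretched = (img - p_lo) / max(p_hi - p_lo, 1e-8)
    return np.clip(stretched, 0.0, 1.0)

def spatial_gradient_magnitude(img):
    """Edge strength as the magnitude of central-difference gradients."""
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, cols
    return np.hypot(gx, gy)

# Toy usage: a vertical step edge between two intensity regions
img = np.zeros((8, 8))
img[:, 4:] = 100.0
edges = spatial_gradient_magnitude(contrast_stretch(img))
```

On this toy image the gradient magnitude peaks on the two columns adjacent to the step and is zero in the flat regions, which is the behavior an edge-identification stage relies on before decomposition.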

Highlights

  • Image processing manipulates input source images to extract the maximum possible information. The information obtained is exploited for several applications, including remote sensing, malware analysis, clinical diagnosis, etc. [1,2,3,4,5]

  • To resolve the aforementioned issues, we propose a novel algorithm for multimodal image fusion, namely Convolutional Sparse Image Decomposition (CSID), with the following contributions

  • These results demonstrate that our proposed CSID achieves higher Mutual Information (MI), EN, Feature Mutual Information (FMI), Q^AB/F, and Visual Information Fidelity (VIF) scores in comparison with all the other image fusion algorithms using different datasets, i.e., Data-1 through Data-6

Summary

Introduction

Image processing manipulates input source images to extract the maximum possible information. The information obtained is exploited for several applications, including remote sensing, malware analysis, clinical diagnosis, etc. [1,2,3,4,5]. The latter requires greater attention, as enhanced clinical diagnosis remains a top priority around the world [6]. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are among the most extensively used imaging modalities [7,8,9]. They allow radiologists to analyze the human body and generate different patterns, which are used in clinical analysis [10]. Although these images provide anatomical statistics [7], the extraction of purposeful functional details from an individual image remains a critical issue. This demands multimodal image fusion, which integrates the complementary information of images from different modalities to produce an enhanced fused image, thereby providing enriched anatomical and functional information [6,7,11,12,13]
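The idea of integrating complementary information from two modalities can be sketched with a toy pixel-wise fusion rule: at each location, keep the source whose local neighborhood carries more detail. This is a deliberately simplified illustration, not the decision maps or fusion rule proposed in this paper; the window size and local-variance criterion are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fuse_max_detail(a, b, ksize=3):
    """Toy fusion rule: per pixel, pick the source with higher local variance.

    Local variance over a ksize x ksize window serves as a crude
    activity measure; real fusion schemes use far richer decision maps.
    """
    def local_var(x):
        pad = ksize // 2
        xp = np.pad(x.astype(float), pad, mode="edge")
        windows = sliding_window_view(xp, (ksize, ksize))
        return windows.var(axis=(-1, -2))

    mask = local_var(a) >= local_var(b)
    return np.where(mask, a, b)

# Toy usage: each source carries structure in a different half of the image
a = np.zeros((6, 6)); a[::2, :3] = 1.0   # detail only on the left
b = np.zeros((6, 6)); b[::2, 3:] = 1.0   # detail only on the right
fused = fuse_max_detail(a, b)
```

In this example the fused output inherits the striped pattern from `a` on the left and from `b` on the right, mimicking (in miniature) how fusion preserves the most informative content from each modality.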
