Abstract

Medical practitioners often have to work with images from various modalities, ranging from X-ray-based computed tomography (CT) to radio-wave-based magnetic resonance imaging (MRI). Each modality provides different information. Multimodal image fusion is the process of merging images of different modalities into a single image that preserves both the complementary and the redundant details, yielding a far more informative representation. Such a single image carrying information from different modalities is particularly useful for medical practitioners and researchers when analyzing a patient's body to detect lesions, if any, and to reach a correct diagnosis. Feature extraction plays a key role in fusing multimodal image data, and convolutional neural networks have accordingly been used extensively in the image fusion literature for some time now. However, few deep learning-based models have been designed specifically for medical images. With that motivation, this chapter is divided into two parts. The first is a comprehensive review of recent work in the field of multimodal image fusion. The second, inspired by several of the methods discussed, proposes an unsupervised deep learning-based medical image fusion architecture that incorporates multiscale feature extraction. Finally, extensive experiments on various multimodal medical images are conducted to analyze the performance, stability, and superiority of the proposed technique.
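To make the idea concrete, the following is a minimal sketch, not the architecture proposed in the chapter, of how an unsupervised multiscale CNN encoder-decoder could fuse two registered single-channel medical images (for example, a CT and an MRI slice). The parallel 3x3/5x5/7x7 branches, the channel widths, and the element-wise maximum fusion rule are illustrative assumptions only.

# Illustrative sketch of multiscale CNN-based fusion; not the chapter's actual model.
import torch
import torch.nn as nn

class MultiscaleEncoder(nn.Module):
    """Extracts features at several receptive-field scales from a single-channel image."""
    def __init__(self, channels=16):
        super().__init__()
        # Parallel branches with different kernel sizes capture fine and coarse detail
        # (kernel sizes chosen here purely for illustration).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, channels, k, padding=k // 2), nn.ReLU())
            for k in (3, 5, 7)
        ])

    def forward(self, x):
        # Concatenate the per-scale feature maps along the channel dimension.
        return torch.cat([b(x) for b in self.branches], dim=1)

class Decoder(nn.Module):
    """Reconstructs a single-channel image from fused multiscale features."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, f):
        return self.net(f)

def fuse(encoder, decoder, img_a, img_b):
    # Assumed fusion rule: element-wise maximum of feature activations, which keeps
    # the most salient response from either modality at each spatial location.
    fa, fb = encoder(img_a), encoder(img_b)
    return decoder(torch.maximum(fa, fb))

if __name__ == "__main__":
    enc, dec = MultiscaleEncoder(), Decoder()
    ct = torch.rand(1, 1, 256, 256)   # placeholder CT slice
    mri = torch.rand(1, 1, 256, 256)  # placeholder MRI slice
    fused = fuse(enc, dec, ct, mri)
    print(fused.shape)                # torch.Size([1, 1, 256, 256])

In an unsupervised setting such an encoder-decoder would typically be trained to reconstruct each source image from its own features (for instance with pixel-wise and structural-similarity losses), with the fusion rule applied only at inference time.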
