Abstract

With the rapid development of technology and instrumentation in recent years, medical image processing has become an active area of research because of its vital role in the health sector. Multimodal medical image fusion, which covers a wide range of methods for handling the medical questions posed by images of the human body, organs, and cells, plays a major role in the diagnosis and treatment of many complex conditions of the brain, spine, prostate, and kidney, such as Alzheimer’s disease, glioma, prostate cancer, vertebra labeling, and renal transplant assessment. Multimodal medical image fusion is the process of combining multiple images from one or more imaging modalities, such as positron emission tomography (PET), single photon emission computed tomography (SPECT), computed tomography (CT), and magnetic resonance imaging (MRI), into a single image with richer anatomical and spectral information. The main goal of image fusion is to improve image quality while preserving the most relevant characteristics of each source image, so that the fused image is more useful for clinical diagnosis and treatment planning. The common thread among the many techniques reported in the literature, whether based on feature processing, machine learning, or sparse representation, is the ability to learn informative features that capture the patterns and regularities intrinsic to the data. What is needed now are more robust, self-trained approaches to image fusion in healthcare, and deep learning (DL) is one of them. This chapter describes DL-based techniques that require only a collection of data with negligible preprocessing and discover informative representations in a self-trained manner, thereby shifting the burden of feature engineering from human experts to the machine and leading to better diagnostic results.
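To make the fusion operation concrete, the sketch below is an illustrative example that is not taken from the chapter: it assumes two already co-registered, same-size grayscale slices (synthetic stand-ins for a CT slice and an MRI slice) and fuses them with a simple pixel-level weighted average, the baseline against which feature-based and DL-based fusion methods are commonly compared.

import numpy as np

def weighted_average_fusion(img_a, img_b, alpha=0.5):
    # Pixel-level fusion of two co-registered, same-size grayscale slices.
    # alpha weights img_a; (1 - alpha) weights img_b.
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    fused = alpha * a + (1.0 - alpha) * b
    # Rescale the result back to the 8-bit range for display or storage.
    span = fused.max() - fused.min()
    fused = 255.0 * (fused - fused.min()) / (span + 1e-12)
    return fused.astype(np.uint8)

if __name__ == "__main__":
    # Synthetic stand-ins for co-registered CT and MRI slices.
    rng = np.random.default_rng(0)
    ct_slice = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    mri_slice = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    fused = weighted_average_fusion(ct_slice, mri_slice, alpha=0.6)
    print(fused.shape, fused.dtype)

DL-based fusion methods replace the fixed weighting rule with learned, content-dependent weights or feature-level combinations, which is the shift from hand-crafted to self-trained feature engineering described above.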
