Abstract

Medical image fusion is the process of combining information from multiple medical images of the same body region, acquired using different imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), into a single image. It is widely used in clinical applications such as oncology, neurology, and cardiology. Deep learning-based approaches have been used to solve this problem. However, some medical image fusion approaches based on pre-trained models still have limitations: these models may not have been trained on a diverse set of medical images, which can make them inefficient at extracting features from medical images. Consequently, fusion rules based on features extracted from such a pre-trained model are not effective. In this study, we use transfer learning to build a modified VGG19 model (called TL_VGG19) and use this model both to extract features and to build an efficient fusion rule for the detail components. Furthermore, to preserve the quality of the composite image, we propose an adaptive fusion method for the base components based on the Equilibrium Optimization Algorithm (EOA). Seven state-of-the-art fusion methods were used for comparison. The experimental results clearly indicate that the proposed method outperforms these state-of-the-art methods.
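To make the two fusion rules described above concrete, the following is a minimal numpy sketch of the general scheme: detail components are fused with soft weights derived from feature-map activity (here, random arrays stand in for TL_VGG19 activations, and the l1-norm activity measure is an assumption, not the paper's exact rule), while base components are fused by a weighted average whose weight would be selected by the EOA (the optimizer itself is omitted).

```python
import numpy as np

def detail_fusion(d1, d2, f1, f2, eps=1e-8):
    """Fuse two detail components using feature-activity weights.

    f1, f2: (C, H, W) feature maps (stand-ins for TL_VGG19 activations).
    d1, d2: (H, W) detail components of the two source images.
    The l1-norm activity map is an illustrative choice, not the
    paper's exact fusion rule.
    """
    a1 = np.abs(f1).sum(axis=0)   # per-pixel l1-norm activity, image 1
    a2 = np.abs(f2).sum(axis=0)   # per-pixel l1-norm activity, image 2
    w1 = a1 / (a1 + a2 + eps)     # soft weight in [0, 1]
    return w1 * d1 + (1.0 - w1) * d2

def base_fusion(b1, b2, alpha):
    """Weighted average of base components.

    In the paper's scheme, the weight alpha would be chosen adaptively
    by the Equilibrium Optimization Algorithm; here it is a parameter.
    """
    return alpha * b1 + (1.0 - alpha) * b2

# Illustrative usage with random data in place of decomposed images.
rng = np.random.default_rng(0)
d1, d2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
f1, f2 = rng.normal(size=(3, 4, 4)), rng.normal(size=(3, 4, 4))
fused_detail = detail_fusion(d1, d2, f1, f2)
fused_base = base_fusion(np.ones((4, 4)), np.zeros((4, 4)), alpha=0.3)
```

Because the detail weights are a convex combination, each fused pixel lies between the corresponding pixels of the two inputs, which is one simple way such rules avoid introducing out-of-range artifacts.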
