Abstract
In medical applications, effective diagnosis is supported by combining medical images from different modalities using image fusion techniques. An accurate diagnosis cannot be made from a single-modality image, and existing fusion approaches suffer from poor image quality and inconsistent performance. To address this problem, this research proposes an effective artificial-intelligence model for multi-modal medical image fusion that combines a deep residual neural network (ResNet-50) with DarkNet-19. An optimised discrete wavelet transform (ODWT) decomposes each source image into high- and low-frequency coefficients. The low-frequency coefficients are fused using a modified ResNet-50 model, while DarkNet-19 fuses the high-frequency coefficients, taking the modified average gradient of the high-frequency coefficients as its input stimulus. Finally, the inverse ODWT reconstructs the fused image. The effectiveness of the proposed fusion model is assessed on CT-MRI, CT-PET, and MRI-SPECT imaging datasets, where it attains a maximum PSNR of 40.03 dB, Fusion Factor of 6.1025, Fusion Symmetry of 0.0869, and Visual Information Fidelity of 0.121.
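The pipeline described above (wavelet decomposition, separate fusion rules for the low- and high-frequency sub-bands, then an inverse transform) can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: it assumes a one-level Haar DWT in place of the optimised DWT, simple averaging in place of the modified ResNet-50 low-frequency rule, and max-absolute coefficient selection in place of the DarkNet-19 gradient-based rule.

```python
import numpy as np


def haar_dwt2(img):
    """One-level 2-D Haar DWT -> (LL, (LH, HL, HH)); img sides must be even."""
    # Transform rows into low-pass and high-pass halves.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Transform columns of each half, yielding the four sub-bands.
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)


def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2: reconstruct the image from its sub-bands."""
    lh, hl, hh = bands
    # Invert the column transform.
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2] = (ll + lh) / np.sqrt(2)
    lo[1::2] = (ll - lh) / np.sqrt(2)
    hi = np.empty_like(lo)
    hi[0::2] = (hl + hh) / np.sqrt(2)
    hi[1::2] = (hl - hh) / np.sqrt(2)
    # Invert the row transform.
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2] = (lo + hi) / np.sqrt(2)
    out[:, 1::2] = (lo - hi) / np.sqrt(2)
    return out


def fuse(img_a, img_b):
    """Fuse two registered, same-size grayscale images sub-band by sub-band."""
    ll_a, hi_a = haar_dwt2(img_a)
    ll_b, hi_b = haar_dwt2(img_b)
    # Low-frequency fusion: averaging stands in for the ResNet-50 rule.
    ll_f = (ll_a + ll_b) / 2
    # High-frequency fusion: max-absolute selection stands in for the
    # DarkNet-19 average-gradient rule.
    hi_f = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(hi_a, hi_b))
    return haar_idwt2(ll_f, hi_f)
```

Because the Haar pair is perfectly invertible, fusing an image with itself returns the image unchanged, which is a convenient sanity check for any sub-band fusion scheme of this shape.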
Published in: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization