Abstract

Multimodal medical image fusion is essential for reducing redundancy while extracting the required information from input images acquired by diverse medical imaging sensors. The goal is to produce a single fused image that is highly informative and supports clinical evaluation. In this research work, a Two-Stage Multi-modal Medical Image Fusion (TSMMIF) model is presented via a cascade of the Optimal Dual-Tree Complex Wavelet Transform (O-DTCWT) and the Non-Subsampled Shearlet Transform (NSST), applied to two different medical image modalities. In the first stage, the collected image 1 and image 2 are separately decomposed by the Optimal DTCWT into high- and low-frequency components. The high-frequency components are fused via fuzzy logic, whereas the maximum rule is used to fuse the low-frequency components. The fused subbands are then passed to the inverse DTCWT for image reconstruction. The DTCWT is well suited to image fusion because it offers approximate shift invariance and multi-dimensional (directional) selectivity. Afterwards, the reconstructed image is fed to the second stage of decomposition, where the NSST divides it into low- and high-frequency components. Here, the high-frequency components are fused using an Optimized Deep Neural Network (ODNN), and the low-frequency components are fused with the help of an averaging rule. The parameters of the DTCWT and the DNN are optimized by the Enhanced Marine Predators Optimization (EMPO) algorithm. Finally, the two multi-modal medical images are fused, and simulations are carried out to evaluate how effectively the developed TSMMIF model improves image fusion quality.
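
To make the first-stage pipeline concrete, the following is a minimal sketch in Python using the open-source dtcwt package. It is an illustration under simplifying assumptions, not the paper's implementation: the fuzzy-logic rule for the high-frequency subbands is stood in for by a maximum-magnitude selection, and the EMPO-optimized DTCWT parameters are replaced by library defaults.

    import numpy as np
    import dtcwt

    def stage1_fuse(img1: np.ndarray, img2: np.ndarray, nlevels: int = 3) -> np.ndarray:
        """Stage 1: DTCWT decomposition, per-subband fusion, inverse DTCWT.

        Assumes same-shape grayscale inputs whose sides are divisible by 2**nlevels.
        """
        transform = dtcwt.Transform2d()
        p1 = transform.forward(img1.astype(float), nlevels=nlevels)
        p2 = transform.forward(img2.astype(float), nlevels=nlevels)

        # Low-frequency (approximation) subband: maximum rule, as in the paper.
        fused_low = np.maximum(p1.lowpass, p2.lowpass)

        # High-frequency subbands: keep the complex coefficient with the larger
        # magnitude at each location (a simple stand-in for the fuzzy-logic rule).
        fused_high = tuple(
            np.where(np.abs(h1) >= np.abs(h2), h1, h2)
            for h1, h2 in zip(p1.highpasses, p2.highpasses)
        )

        # Inverse DTCWT reconstructs the stage-1 fused image.
        return transform.inverse(dtcwt.Pyramid(fused_low, fused_high))

The second stage would follow the same pattern, substituting an NSST decomposition for the DTCWT, an averaging rule for the low-frequency subbands, and a trained deep neural network in place of the magnitude comparison for the high-frequency subbands.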
