Abstract

Medical image fusion aims to combine complementary diagnostic details from multiple modalities for better visualization of comprehensive information, improved interpretation of various diseases, and treatment planning. In this paper, a multistage multimodal fusion model is presented based on the nonsubsampled shearlet transform (NSST), the stationary wavelet transform (SWT), and a feature-adaptive pulse-coupled neural network (PCNN). First, NSST is employed to decompose the source images into optimally sparse multi-resolution components, followed by SWT. Second, structural features are extracted by a weighted sum-modified Laplacian and applied to an adaptive model that maps feature weights for fusing the low-band SWT components, while a texture-feature-based fusion rule is applied to fuse the high-band SWT components. The high-frequency NSST components are fused using a rule based on the absolute maximum and the sum of absolute differences to retain complex directional details. Experimental results show that the proposed method produces significantly better fused medical images than competing methods, with excellent visual quality and improved quantitative measures.
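The sum-modified-Laplacian feature and the absolute-maximum fusion rule mentioned above can be sketched as follows. This is a minimal illustration only, assuming uniform window weights for the weighted sum-modified Laplacian; the paper's actual weighting scheme, the adaptive PCNN stage, and the NSST/SWT decompositions themselves are not shown here:

```python
import numpy as np

def modified_laplacian(img):
    # ML(x, y) = |2I - I_left - I_right| + |2I - I_up - I_down|,
    # with edge pixels handled by replication padding.
    p = np.pad(img, 1, mode="edge")
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def weighted_sml(img, radius=1):
    # Sum the modified-Laplacian values over a (2r+1) x (2r+1) window.
    # Uniform weights are an assumption made for this sketch.
    ml = modified_laplacian(img)
    p = np.pad(ml, radius, mode="edge")
    out = np.zeros_like(ml)
    h, w = ml.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += p[dy : dy + h, dx : dx + w]
    return out

def fuse_absmax(band_a, band_b):
    # Absolute-maximum rule: at each pixel, keep the coefficient
    # with the larger magnitude (used for high-frequency sub-bands).
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)
```

In a focus- or activity-driven fusion scheme of this kind, the `weighted_sml` maps would typically be compared pixel-wise to decide which source contributes each low-band coefficient, while `fuse_absmax` is applied directly to the directional high-frequency coefficients.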
