The fusion of multimodal medical information is considered an assistive approach for medical professionals. Computed tomography and magnetic resonance (CT–MR) medical image fusion can help the radiologist diagnose disease precisely and decide the required treatment in accordance with the patient's condition. Therefore, this study proposes a cascaded framework that fuses multimodal medical information in the ripplet transform (RT) and non-subsampled shearlet transform (NSST) domains. The RT and NSST, which have complementary features, are utilised in a cascaded manner that provides numerous directional decomposition coefficients and increases shift-invariance information in the fused images. At the first decomposition stage, a biologically inspired neural model, driven by a novel sum-modified Laplacian and by spatial frequency, is utilised to fuse the low- and high-frequency coefficients, respectively; at the second stage, a max fusion rule based on regional energy is applied. This model also helps preserve redundant information. The fusion performance is validated by extensive simulations on CT–MR image datasets covering different diseases. Experimental results demonstrate that the proposed method produces better fused images, in terms of both visual quality and quantitative indices, than several existing fusion approaches.
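To make the stage-2 rule concrete, the following is a minimal sketch of a standard regional-energy max fusion rule: at each position, the coefficient from whichever source subband has the larger local energy is retained. This illustrates the generic technique only, not the paper's exact implementation; the window size and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy(coeff, size=3):
    # Local energy: sum of squared coefficients over a size x size window
    # (uniform_filter returns the mean, so multiply by the window area).
    return uniform_filter(coeff ** 2, size=size) * (size * size)

def fuse_max_regional_energy(c1, c2, size=3):
    # Max rule: keep, per pixel, the coefficient whose regional energy
    # is larger, as in a stage-2 regional-energy fusion step.
    e1 = regional_energy(c1, size)
    e2 = regional_energy(c2, size)
    return np.where(e1 >= e2, c1, c2)

# Toy usage with random stand-ins for decomposition subbands.
a = np.random.randn(64, 64)
b = np.random.randn(64, 64)
fused = fuse_max_regional_energy(a, b)
```

In a full pipeline, such a rule would be applied to the directional subbands produced by the RT–NSST cascade before the inverse transforms reconstruct the fused image.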