Medical image fusion is important in clinical diagnosis because it increases the amount of usable information in images. Magnetic Resonance Imaging (MRI) provides excellent anatomical detail as well as functional information on regional changes in physiology, hemodynamics, and tissue composition. In contrast, although the spatial resolution of Positron Emission Tomography (PET) is lower than that of MRI, PET can depict molecular and pathological activities of tissue that are not available from MRI. Fusing MRI and PET may therefore combine the advantages of both imaging modalities and achieve more precise localization and characterization of abnormalities. Previous image fusion algorithms based on estimation theory assume that all distortions follow a Gaussian distribution and are therefore susceptible to the model mismatch problem. To overcome this problem, we propose a new image fusion method based on multi-resolution and nonparametric density models (MRNDM). The registered source multi-modal medical images in RGB space are first transformed into a generalized intensity-hue-saturation (GIHS) space and then decomposed into low- and high-frequency components using the non-subsampled contourlet transform (NSCT). Two different fusion rules, based on the nonparametric density model and the theory of variable weights, are developed and used to fuse the low- and high-frequency coefficients, respectively. The fused images are reconstructed by applying the inverse NSCT to the composite coefficients. Our experimental results demonstrate that images fused from PET and MRI brain images using the proposed MRNDM method are of higher quality than those produced by six previous fusion methods.
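The pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's MRNDM: the NSCT is replaced by a single-level box-filter decomposition, and the nonparametric-density and variable-weight fusion rules are replaced by simple averaging and max-absolute-value selection as stand-ins. The function names and the linear IHS matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

# Linear intensity-hue-saturation (IHS)-style transform commonly used
# in image fusion; the actual GIHS transform in the paper may differ.
_T = np.array([
    [1/3,            1/3,            1/3],           # intensity
    [-np.sqrt(2)/6,  -np.sqrt(2)/6,  np.sqrt(2)/3],  # chroma axis 1
    [1/np.sqrt(2),   -1/np.sqrt(2),  0.0],           # chroma axis 2
])

def rgb_to_ihs(img):
    """img: H x W x 3 float array; returns intensity + two chroma planes."""
    return img @ _T.T

def ihs_to_rgb(ihs):
    """Invert the linear transform to recover RGB."""
    return ihs @ np.linalg.inv(_T).T

def box_blur(x, k=3):
    """Simple k x k box low-pass filter, standing in for the NSCT low band."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def fuse_intensity(i_mri, i_pet):
    """Fuse two intensity planes band by band (placeholder rules)."""
    low_m, low_p = box_blur(i_mri), box_blur(i_pet)
    high_m, high_p = i_mri - low_m, i_pet - low_p
    # Placeholder for the nonparametric-density rule: average the low bands.
    low = 0.5 * (low_m + low_p)
    # Placeholder for the variable-weight rule: keep the stronger detail.
    high = np.where(np.abs(high_m) >= np.abs(high_p), high_m, high_p)
    return low + high
```

In a full implementation, the intensity plane of the PET image (after the GIHS transform) would be fused with the MRI image via the multi-level NSCT, and the result transformed back with `ihs_to_rgb`; the chroma planes carry the PET color information through unchanged.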