Abstract

Magnetic resonance imaging (MRI) and positron emission tomography (PET) image fusion is a recent hybrid modality used in several oncology applications. The MRI image shows the brain tissue anatomy but contains no functional information, while the PET image indicates brain function but has a low spatial resolution. An ideal MRI–PET fusion method preserves the functional information of the PET image and adds the spatial characteristics of the MRI image with as little spatial distortion as possible. In this context, the authors propose an efficient MRI–PET image fusion approach based on the non-subsampled shearlet transform (NSST) and a simplified pulse-coupled neural network model (S-PCNN). First, the PET image is transformed into its independent YIQ components. Then, the registered source MRI image and the Y-component of the PET image are decomposed into low-frequency (LF) and high-frequency (HF) subbands using NSST. The LF coefficients are fused using a weighted regional standard deviation (SD) and local-energy rule, while the HF coefficients are combined using the S-PCNN, whose linking-strength coefficient is set adaptively. Finally, the inverse NSST and inverse YIQ transform are applied to obtain the fused image. Experimental results demonstrate that the proposed method outperforms other current approaches in terms of fusion mutual information, entropy, SD, fusion quality, and spatial frequency.
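To make the data flow of the pipeline concrete, the following minimal Python/NumPy sketch mirrors its overall structure only. It is not the authors' method: the NSST is replaced by a single Gaussian lowpass/highpass split, the weighted regional SD and local-energy rule by a simple local-energy weighting, and the S-PCNN rule by an absolute-maximum choice. Only the RGB↔YIQ conversion uses the standard NTSC matrices; all other details are illustrative stand-ins.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

# Standard NTSC RGB -> YIQ conversion matrix.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def fuse_mri_pet(mri, pet_rgb, sigma=2.0, win=7):
    """mri: H x W grayscale in [0, 1]; pet_rgb: H x W x 3 RGB in [0, 1]."""
    # 1. Move the PET image to YIQ; only the luminance (Y) channel is fused,
    #    so the PET functional (colour) information is carried by I and Q.
    pet_yiq = pet_rgb @ RGB2YIQ.T
    pet_y = pet_yiq[..., 0]

    # 2. Stand-in for the NSST decomposition: a Gaussian lowpass band plus
    #    the highpass residual for each source image.
    mri_lf, pet_lf = gaussian_filter(mri, sigma), gaussian_filter(pet_y, sigma)
    mri_hf, pet_hf = mri - mri_lf, pet_y - pet_lf

    # 3a. LF rule (simplified surrogate): weight each source by its local
    #     energy in a win x win window.
    e_mri = uniform_filter(mri_lf**2, win)
    e_pet = uniform_filter(pet_lf**2, win)
    w = e_mri / (e_mri + e_pet + 1e-12)
    fused_lf = w * mri_lf + (1.0 - w) * pet_lf

    # 3b. HF rule (stand-in for the S-PCNN firing-map comparison): keep the
    #     coefficient with the larger magnitude.
    fused_hf = np.where(np.abs(mri_hf) >= np.abs(pet_hf), mri_hf, pet_hf)

    # 4. Reconstruct the fused luminance, restore the PET chrominance,
    #    and convert back to RGB.
    fused_y = fused_lf + fused_hf
    fused_yiq = np.dstack([fused_y, pet_yiq[..., 1], pet_yiq[..., 2]])
    return np.clip(fused_yiq @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)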
