Abstract

Most multimodal medical image fusion (MMIF) methods suffer from insufficient complementary feature extraction and luminance degradation, so the fused results cannot effectively assist clinical diagnosis. To address these defects, a novel end-to-end unsupervised learning fusion network is proposed, termed the pair feature difference guided network (FDGNet). To adequately extract complementary features from the source images, the MMIF task is modeled as feature-weighted guided learning: the feature extraction framework calculates the differences among features at various levels, so that the feature reconstruction framework, guided by these feature differences, can generate a pair of interactive weights to directly produce the fused result. Simultaneously, a hybrid loss composed of a weighted fidelity loss and a feature difference loss is introduced to effectively train the proposed network. In particular, a weight estimation scheme is designed for the weighted fidelity loss, which jointly exploits the enhanced saliency and pixel intensity of the source images to prevent luminance degradation in the fused image. Extensive experiments on six categories of multimodal medical images demonstrate that FDGNet not only preserves rich luminance (CT), tissue texture (MRI), and functional (PET/SPECT) details from the source images but also improves the quantitative metrics NMI, QABF, QY, and VIFP by about 68.68%, 6.73%, 12.52%, and 18.33%, respectively, over the second-best algorithm.
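As a rough illustration of the hybrid loss described above, the sketch below combines a weighted fidelity term with a multi-level feature difference term. The gradient-based saliency, the weight_map and hybrid_loss helpers, the min-based feature difference, and the alpha coefficient are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed formulation, not FDGNet's exact loss):
# weighted fidelity loss + feature difference loss for fusing two sources.
# Inputs are assumed to be 4D tensors [B, C, H, W].
import torch
import torch.nn.functional as F


def weight_map(img_a, img_b, eps=1e-6):
    """Assumed weight estimation: blend of local saliency (gradient magnitude)
    and pixel intensity, normalized so the two source weights sum to one."""
    def saliency(x):
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return (F.pad(gx.abs(), (0, 1)).mean(1, keepdim=True)
                + F.pad(gy.abs(), (0, 0, 0, 1)).mean(1, keepdim=True))

    s_a = saliency(img_a) + img_a.mean(1, keepdim=True)
    s_b = saliency(img_b) + img_b.mean(1, keepdim=True)
    w_a = s_a / (s_a + s_b + eps)
    return w_a, 1.0 - w_a


def hybrid_loss(fused, img_a, img_b, feats_fused, feats_a, feats_b, alpha=1.0):
    """Illustrative hybrid loss: weighted fidelity + feature difference."""
    w_a, w_b = weight_map(img_a, img_b)
    fidelity = (w_a * (fused - img_a) ** 2 + w_b * (fused - img_b) ** 2).mean()

    # Feature difference term (assumed form): keep fused features close to the
    # nearer of the two source features at each level.
    feat_diff = sum(
        torch.minimum((ff - fa).abs(), (ff - fb).abs()).mean()
        for ff, fa, fb in zip(feats_fused, feats_a, feats_b)
    )
    return fidelity + alpha * feat_diff
```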
