Abstract

Although many powerful convolutional neural networks (CNNs) have been applied to various image-processing tasks, they cannot be used directly for medical image fusion (MIF): datasets for network training are scarce, and multi-modal source images exhibit significantly different intensities at the same location. This is a major problem that limits the development of the field. In this article, a novel multimodal medical image fusion method based on the non-subsampled contourlet transform (NSCT) and a CNN is presented. The proposed algorithm not only addresses this problem but also exploits the advantages of both NSCT and CNN to obtain better fusion results. In the proposed algorithm, the source multi-modality images are first decomposed into low- and high-frequency subbands. For the high-frequency subbands, a new perceptual high-frequency CNN (PHF-CNN), trained in the frequency domain, is designed as an adaptive fusion rule. For the low-frequency subband, two result maps are adopted to generate the decision map. Finally, the fused frequency subbands are integrated by the inverse NSCT. To verify the effectiveness of the proposed algorithm, ten state-of-the-art MIF algorithms are selected for comparison. Subjective evaluations by five doctors, as well as objective evaluations by seven image quality metrics, demonstrate that the proposed algorithm is superior to the comparative algorithms in fusing multimodal medical images.
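The decompose–fuse–reconstruct pipeline described above can be illustrated with a minimal sketch. Note the hedging: the abstract does not specify the implementation, so this example substitutes a simple box-filter low/high split for the NSCT decomposition, a maximum-absolute-coefficient rule for the learned PHF-CNN fusion, an average for the decision-map rule on the low-frequency subband, and plain addition for the inverse NSCT. All function names here are hypothetical; only the overall structure follows the abstract.

```python
import numpy as np

def lowpass(img, k=5):
    # Box-filter low-pass: a crude stand-in for the NSCT
    # low-frequency subband of the source image.
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(a, b):
    # Decompose each source image into low- and high-frequency parts.
    la, lb = lowpass(a), lowpass(b)
    ha, hb = a - la, b - lb
    # High frequency: keep the coefficient with larger magnitude
    # (stand-in for the paper's learned PHF-CNN fusion rule).
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    # Low frequency: simple averaging (stand-in for the
    # decision-map rule built from the two result maps).
    low = (la + lb) / 2.0
    # Reconstruction (stand-in for the inverse NSCT).
    return low + high
```

The point of the sketch is the three-stage structure (decomposition, per-subband fusion rules, inverse transform); in the actual method, each stand-in stage is replaced by NSCT, the PHF-CNN, and the inverse NSCT, respectively.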
