Abstract

Multimodal medical image fusion is an auxiliary approach that helps doctors diagnose diseases accurately by leveraging information enhancement technology. To date, no fusion strategy is authoritative, and exploring methods with excellent performance remains the central theme of image fusion research. The local extrema scheme (LES) and convolutional neural networks (CNNs) perform remarkably well in medical image fusion tasks. However, the low decomposition efficiency of the LES and the limitations of CNNs need to be addressed. Therefore, a novel framework is proposed by combining the local extrema scheme with a Siamese network, tackling the aforementioned issues by improving the decomposition efficiency of the LES and customizing the fusion strategy. First, the multi-scale local extrema scheme (MSLES) is introduced to decompose each source image into a series of detail layers and a smoothed layer. Next, an adaptive dual-channel spiking cortical model (ADCSCM) based on image information entropy (EN) is constructed to fuse the smoothed layers, and a feasible weight allocation strategy combining the Siamese network and EN is designed to fuse the detail layers. Finally, the informative fused image is reconstructed from the fused smoothed layer and detail layers. Extensive experimental results and metrics show that the proposed framework outperforms other state-of-the-art methods.
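To make the decompose-fuse-reconstruct structure described above concrete, the following is a minimal NumPy/SciPy sketch of the pipeline shape only, not the authors' implementation: the envelope interpolation of the true LES is approximated with max/min filters, the ADCSCM fusion rule is replaced by a simple entropy-weighted average, and the Siamese-network weight allocation is replaced by max-absolute selection. All function names here (`local_extrema_smooth`, `msles_decompose`, `entropy`, `fuse`) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def local_extrema_smooth(img, size=3):
    # One LES-style smoothing pass: average the upper and lower envelopes.
    # The true scheme interpolates envelopes from local extrema points;
    # max/min filters plus mean smoothing are used here as a rough proxy.
    upper = uniform_filter(maximum_filter(img, size=size), size=size)
    lower = uniform_filter(minimum_filter(img, size=size), size=size)
    return 0.5 * (upper + lower)

def msles_decompose(img, levels=3):
    # Multi-scale LES: each level peels off a detail layer (residual)
    # and passes the smoothed layer on to the next, coarser scale.
    details, smooth = [], img.astype(np.float64)
    for k in range(levels):
        next_smooth = local_extrema_smooth(smooth, size=2 * k + 3)
        details.append(smooth - next_smooth)
        smooth = next_smooth
    return details, smooth

def entropy(img, bins=256):
    # Shannon information entropy (EN) of the grey-level histogram.
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def fuse(img_a, img_b, levels=3):
    det_a, smooth_a = msles_decompose(img_a, levels)
    det_b, smooth_b = msles_decompose(img_b, levels)
    # Smoothed layers: EN-weighted average (stand-in for the ADCSCM).
    ea, eb = entropy(smooth_a), entropy(smooth_b)
    wa = ea / (ea + eb + 1e-12)
    fused_smooth = wa * smooth_a + (1 - wa) * smooth_b
    # Detail layers: max-absolute selection (stand-in for the
    # Siamese-network weight allocation).
    fused_details = [np.where(np.abs(da) >= np.abs(db), da, db)
                     for da, db in zip(det_a, det_b)]
    # Reconstruction: fused smoothed layer plus all fused detail layers.
    return fused_smooth + sum(fused_details)
```

In the paper itself, the smoothed-layer fusion is driven by the ADCSCM's firing behavior and the detail-layer weights by the Siamese network's outputs combined with EN; the sketch above only mirrors the overall decomposition, per-layer fusion, and reconstruction flow.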
