Data fusion has become a significant problem in diagnostic imaging, particularly in medical applications such as radiotherapy and image-guided surgery. Medical image fusion aims to improve the precision of tumor diagnosis by preserving the salient information and characteristics of the original images in the fused image. Guided filters have been shown to preserve edges well. In this paper, we propose a novel cross-guided filter-based fusion approach for multimodal medical images that utilizes convolutional neural networks. The proposed algorithm uses the cross-guided filter to extract detailed features from the source images. Convolutional neural networks then generate feature weights for the source images from these detail layers, and the source images are merged with a weighted-average rule based on those weights. We compared the effectiveness of the proposed method with that of existing methods, both numerically and visually, on thirty medical images of distinct types drawn from diverse sources. The experimental results demonstrate that the proposed method outperforms standard existing methods in both objective evaluation and qualitative image quality. Quantitatively, compared with the existing methods considered, the proposed algorithm improves mutual information by 25%, image entropy by 9.5%, spatial frequency by 21%, standard deviation by 18.1%, the structural similarity index by 30%, and the edge strength of the fused image by 39%.
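To make the pipeline described above concrete, the following is a minimal illustrative sketch of cross-guided decomposition followed by weighted-average fusion. It assumes two co-registered grayscale source images and replaces the paper's CNN-derived weight maps (whose architecture is not specified in the abstract) with a simple local detail-energy proxy; the function names, parameters, and the weight heuristic are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving guided filter (He et al.), using box filters."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size=size)
    mean_p = uniform_filter(src, size=size)
    corr_Ip = uniform_filter(guide * src, size=size)
    corr_II = uniform_filter(guide * guide, size=size)
    cov_Ip = corr_Ip - mean_I * mean_p          # local covariance of guide and src
    var_I = corr_II - mean_I ** 2               # local variance of guide
    a = cov_Ip / (var_I + eps)                  # local linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size=size)
    mean_b = uniform_filter(b, size=size)
    return mean_a * guide + mean_b


def cross_guided_fuse(img_a, img_b, radius=4, eps=1e-3):
    """Cross-guided decomposition: each image is smoothed with the OTHER
    image as the guide; the residual forms its detail layer."""
    base_a = guided_filter(img_b, img_a, radius, eps)
    base_b = guided_filter(img_a, img_b, radius, eps)
    detail_a = img_a - base_a
    detail_b = img_b - base_b
    # Placeholder saliency weights from local detail energy; in the paper,
    # CNN-generated feature weights would replace this step (assumption).
    energy_a = uniform_filter(detail_a ** 2, size=7)
    energy_b = uniform_filter(detail_b ** 2, size=7)
    w = energy_a / (energy_a + energy_b + 1e-12)
    return w * img_a + (1.0 - w) * img_b       # weighted-average fusion rule


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((128, 128))    # stand-ins for co-registered CT/MRI slices
    mri = rng.random((128, 128))
    fused = cross_guided_fuse(ct, mri)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

The cross-guidance step is the key design choice: because each base layer is computed with the other modality as the guide, structures present in either source survive into the detail layers, which is what lets the fusion rule weight complementary information from both images.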