Abstract

Medical image fusion integrates complementary information from multi-modal medical images, yielding more comprehensive and accurate image results. This paper proposes a new two-scale zero-learning medical image fusion method that combines a pre-trained Res2Net with an adaptive guided filter. First, the method uses the guided filter to decompose a medical image into a base layer representing large-scale intensity variations and a detail layer containing small-scale variations. Because the guided filter's parameters influence the fusion results and their manual selection is time-consuming, an adaptive guided filter based on multi-modal medical image features is proposed. The detail layers are then fused by an elementwise-sum strategy to retain more detail information from the source images, while the base layers are fused using deep feature maps extracted from the pre-trained Res2Net. Finally, the fused detail and base layers are reconstructed to obtain the fused medical image. The superiority of the proposed method is demonstrated through ablation studies and through comparisons with seven typical and state-of-the-art image fusion methods in terms of visual quality and quantitative metrics. The experimental results show that the proposed method outperforms the other methods in retaining effective detail information and image clarity.
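To make the two-scale pipeline concrete, the following is a minimal sketch in Python, assuming grayscale float images in [0, 1]. It implements a standard guided filter (He et al.) for the base/detail decomposition and the elementwise-sum detail fusion described above; the Res2Net-derived base-layer weighting and the adaptive parameter selection are not reproduced here, so a simple average and fixed `r`, `eps` stand in as placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r, eps):
    """Edge-preserving guided filter (He et al.) with box-filter radius r."""
    size = 2 * r + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def two_scale_fuse(img_a, img_b, r=8, eps=0.01):
    """Two-scale fusion: guided-filter decomposition, elementwise-sum
    detail fusion, and (placeholder) averaged base-layer fusion."""
    # Decompose each modality into a base layer (large-scale intensity
    # variations) and a detail layer (small-scale variations).
    base_a = guided_filter(img_a, img_a, r, eps)
    base_b = guided_filter(img_b, img_b, r, eps)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    # Detail layers: elementwise sum, as in the abstract.
    detail_f = detail_a + detail_b
    # Base layers: a simple average stands in for the Res2Net-based weights.
    base_f = 0.5 * (base_a + base_b)
    # Reconstruct the fused image from the fused base and detail layers.
    return np.clip(base_f + detail_f, 0.0, 1.0)

# Usage with two hypothetical co-registered modalities, e.g. CT and MRI.
ct = np.random.rand(256, 256)
mri = np.random.rand(256, 256)
fused = two_scale_fuse(ct, mri)
```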
