Abstract

Medical image fusion aims to integrate complementary information from multimodal medical images and has been widely applied in medicine, for example in clinical diagnosis, pathology analysis, and treatment examinations. For the fusion task, feature extraction is a crucial step. To obtain the significant information embedded in medical images, many deep learning-based algorithms have been proposed recently and have achieved good fusion results. However, most of them can hardly capture the independent and underlying features, which leads to unsatisfactory fusion results. To address these issues, a multibranch residual attention reconstruction network (MBRARN) is proposed for the medical image fusion task. The proposed network mainly consists of three parts: feature extraction, feature fusion, and feature reconstruction. Firstly, the input medical images are converted into three scales by an image pyramid operation and then fed into the three branches of the proposed network, respectively. The purpose of this procedure is to capture both local detailed information and global structural information. Then, convolutions with residual attention modules are designed, which not only enhance the captured salient features but also make the network converge quickly and stably. Finally, feature fusion is performed with the designed fusion strategy. In this step, a new, more effective fusion strategy, called the feature distance ratio (FDR), is correspondingly designed for MRI-SPECT fusion based on the Euclidean norm. The experimental results on the Harvard whole brain atlas dataset demonstrate that the proposed network achieves better results in terms of both subjective and objective evaluation, compared with some state-of-the-art medical image fusion algorithms.
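The abstract does not give the FDR formula or the pyramid details; a minimal sketch of the two ideas it names is shown below, assuming the image pyramid is a simple 2x downsampling and that FDR weights each modality's features by the ratio of its Euclidean norm to the sum of both norms. Both assumptions are illustrative, not the paper's actual definitions.

```python
import numpy as np

def pyramid_scales(img, levels=3):
    # Illustrative three-scale pyramid built by 2x2 average pooling;
    # a stand-in for the paper's (unspecified) image pyramid operation.
    scales = [img]
    for _ in range(levels - 1):
        cur = scales[-1]
        h, w = cur.shape[0] // 2 * 2, cur.shape[1] // 2 * 2
        cur = cur[:h, :w]
        scales.append(cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return scales

def fdr_fuse(feat_a, feat_b, eps=1e-8):
    # Hypothetical feature-distance-ratio fusion: the weight for each
    # feature map is the ratio of its Euclidean norm to the sum of both
    # norms, so the stronger feature response dominates the blend.
    na = np.linalg.norm(feat_a)
    nb = np.linalg.norm(feat_b)
    wa = na / (na + nb + eps)
    return wa * feat_a + (1.0 - wa) * feat_b

# Toy MRI/SPECT feature maps standing in for extracted network features.
rng = np.random.default_rng(0)
mri, spect = rng.random((64, 64)), rng.random((64, 64))
fused = fdr_fuse(mri, spect)
print(fused.shape)                    # (64, 64)
print(len(pyramid_scales(mri)))       # 3
```

The ratio-of-norms weighting is one common norm-based fusion rule; the paper's actual FDR may differ in how the distance and ratio are defined.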
