Abstract

To address the insufficient detail retention of sparse representation (SR) based multimodal medical image fusion (MMIF), an MMIF method based on density peak clustering (DPC) and convolutional sparse representation (CSR), termed CSR-DPC, is proposed. First, the base layer is obtained from each registered input image with an averaging filter, and the detail layer is obtained by subtracting the base layer from the original image. Second, to retain fine details, the detail layers are fused by CSR to obtain the fused detail layer. The base layer is then segmented into image blocks, the blocks are clustered with DPC, a sub-dictionary is trained for each cluster, and the sub-dictionaries are combined into an adaptive dictionary. The sparse coefficients are fused over the learned adaptive dictionary, and the fused base layer is obtained by reconstruction. Finally, the fused detail layer and the fused base layer are combined to reconstruct the final fused image. Experiments show that, compared with two state-of-the-art multi-scale transform methods and five SR methods, the proposed CSR-DPC method performs better in terms of detail preservation, visual quality, and objective evaluation metrics, which can be helpful for clinical diagnosis and adjuvant treatment.
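
As an informal illustration of the two-scale decomposition described above, the following Python sketch splits an image into a base layer (averaging filter) and a detail layer (original minus base). The filter size of 31 and the function names are assumed example choices for illustration, not values or code taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(image, kernel_size=31):
    # Base layer: local mean of the registered input image (averaging filter).
    # kernel_size is an assumed example value, not specified by the paper.
    img = image.astype(np.float64)
    base = uniform_filter(img, size=kernel_size)
    # Detail layer: original image minus base layer.
    detail = img - base
    return base, detail

In the described pipeline, the detail layers of the source images would then be fused by CSR, while the base layers would be fused via sparse coding over the DPC-based adaptive dictionary before the two fused layers are recombined.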
