Abstract

In the field of medical imaging, the fusion of data from diverse modalities plays a pivotal role in advancing our understanding of pathological conditions. Sparse representation (SR), a robust signal modeling technique, has demonstrated notable success in multi-dimensional (MD) medical image fusion. However, a fundamental limitation of existing SR models is their lack of directionality, which restricts their ability to extract anatomical details from different imaging modalities. To address this issue, we propose a novel directional SR model, termed complex sparse representation (ComSR), specifically designed for medical image fusion. ComSR independently represents MD signals over directional dictionaries along specific directions, allowing precise analysis of the intricate details of MD signals. In addition, current studies in medical image fusion mostly concentrate on either 2D or 3D fusion problems. This work bridges that gap by proposing an MD medical image fusion method based on ComSR, providing a unified framework for both 2D and 3D fusion tasks. Experimental results across six multi-modal medical image fusion tasks, involving 93 pairs of 2D source images and 20 pairs of 3D source images, substantiate the superiority of the proposed method over 11 state-of-the-art 2D fusion methods and 4 representative 3D fusion methods in terms of both visual quality and objective evaluation. The source code of our fusion method is available at https://github.com/Imagefusions/imagefusions/tree/main.
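For readers unfamiliar with SR-based fusion, the following is a minimal sketch of the conventional (non-directional) patch-based SR fusion pipeline that ComSR builds upon, not an implementation of ComSR itself. The overcomplete DCT dictionary, 8x8 patch size, sparsity level, and max-L1 coefficient selection rule are standard illustrative choices assumed here, not taken from the paper.

```python
# Minimal sketch of classic sparse-representation (SR) image fusion:
# sparse-code overlapping patches of two source images over a shared
# overcomplete DCT dictionary, keep the code with larger L1 activity
# per patch (max-L1 rule), and reconstruct the fused image.
# Illustrative only; the paper's ComSR uses directional dictionaries instead.
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import SparseCoder
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)


def overcomplete_dct_dictionary(patch=8, atoms_1d=16):
    """Separable overcomplete DCT dictionary, shape (atoms_1d**2, patch**2)."""
    d = dct(np.eye(atoms_1d), norm='ortho')[:patch, :]   # patch x atoms_1d
    D = np.kron(d, d)                                    # patch^2 x atoms_1d^2
    D /= np.linalg.norm(D, axis=0, keepdims=True)        # unit-norm atoms
    return D.T                                           # n_atoms x n_features


def sr_fuse(img_a, img_b, patch=8, n_nonzero=5):
    D = overcomplete_dct_dictionary(patch)
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=n_nonzero)
    # Vectorize all overlapping patches of each source image.
    pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)
    ma, mb = pa.mean(1, keepdims=True), pb.mean(1, keepdims=True)
    ca, cb = coder.transform(pa - ma), coder.transform(pb - mb)
    # Max-L1 rule: per patch, keep the sparse code with larger activity.
    pick_a = np.abs(ca).sum(1) >= np.abs(cb).sum(1)
    codes = np.where(pick_a[:, None], ca, cb)
    means = np.where(pick_a[:, None], ma, mb)
    rec = codes @ D + means                              # reconstruct patches
    return reconstruct_from_patches_2d(
        rec.reshape(-1, patch, patch), img_a.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((32, 32)), rng.random((32, 32))    # toy source images
    print(sr_fuse(a, b).shape)                           # (32, 32)
```

The max-L1 rule reflects the assumption that a patch with stronger sparse activity carries more salient detail; ComSR extends this idea by making the representation itself direction-aware.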
