• An effective, fast, and noise-robust multimodal medical image fusion method is proposed. An image is modeled as the superposition of a structure layer and an energy layer obtained by a joint-filter decomposition; a novel local gradient energy operator is proposed to fuse the structure layer, and the abs-max rule is used to fuse the energy layer.
• A total of 118 co-registered pairs of medical images covering five different categories of medical image fusion problems (MR-T1/MR-T2, CT/MR, MR-Gad/MR-T2, MR/PET, and MR/SPECT) are tested. To the best of our knowledge, this is the most comprehensive experiment for medical image fusion in both quantity and number of classes.
• Being noniterative, local, and free of multiscale decomposition or reconstruction, the proposed method is easy to implement and achieves high computational efficiency.
• Our method can be effectively extended to other image fusion problems, such as multifocus image fusion, infrared and visible image fusion, millimeter-wave and visible image fusion, and panchromatic and multispectral image fusion.

As a powerful assistive technique for biomedical diagnosis, multimodal medical image fusion has become a hot topic in recent years. Unfortunately, the trade-off among fusion performance, time consumption, and noise robustness remains an enormous challenge for many medical image fusion algorithms. In this paper, an effective, fast, and robust medical image fusion method is proposed. A two-layer decomposition scheme based on the joint bilateral filter is introduced, in which the energy layer contains rich intensity information and the structure layer captures ample details. A novel local gradient energy operator based on the structure tensor and neighbor energy is then proposed to fuse the structure layer, and the l1-max rule is introduced to fuse the energy layer. A total of 118 co-registered pairs of medical images covering five different categories of medical image fusion problems are tested in the experiments. Seven recent representative medical image fusion methods are compared, and six representative quality evaluation metrics with complementary characteristics are employed to objectively evaluate the fused results. Extensive experimental results demonstrate that the proposed method outperforms several state-of-the-art methods in both visual quality and quantitative evaluation, while achieving near-real-time computational efficiency and robustness to noise.
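To make the described pipeline concrete, the sketch below outlines one way the two-layer scheme could be assembled with OpenCV and NumPy. It is illustrative only, not the authors' implementation: the choice of guidance image for the joint bilateral filter, the filter parameters, the window size, and the exact form of the local gradient energy operator (here approximated as the structure-tensor trace plus neighborhood energy) are all assumptions.

```python
# Illustrative sketch of a two-layer (structure/energy) fusion pipeline.
# NOT the authors' reference implementation; parameters and the surrogate
# local gradient energy operator are assumptions for demonstration only.
import numpy as np
import cv2  # opencv-contrib-python provides cv2.ximgproc


def decompose(img, guide, d=9, sigma_color=25, sigma_space=7):
    """Split an image into an energy layer (joint-bilateral-smoothed base)
    and a structure layer (residual detail). The guide image is assumed."""
    energy = cv2.ximgproc.jointBilateralFilter(
        guide.astype(np.float32), img.astype(np.float32),
        d, sigma_color, sigma_space)
    structure = img.astype(np.float32) - energy
    return structure, energy


def local_gradient_energy(structure, win=7):
    """Assumed surrogate for the local gradient energy operator:
    structure-tensor entries plus layer energy, averaged over a window."""
    gx = cv2.Sobel(structure, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(structure, cv2.CV_32F, 0, 1, ksize=3)
    k = (win, win)
    jxx = cv2.boxFilter(gx * gx, -1, k)   # smoothed structure-tensor Jxx
    jyy = cv2.boxFilter(gy * gy, -1, k)   # smoothed structure-tensor Jyy
    jxy = cv2.boxFilter(gx * gy, -1, k)   # smoothed structure-tensor Jxy
    neighbor_energy = cv2.boxFilter(structure ** 2, -1, k)
    return jxx + jyy + 2.0 * np.abs(jxy) + neighbor_energy


def fuse_pair(img_a, img_b):
    """Fuse two co-registered single-channel images of the same size."""
    s_a, e_a = decompose(img_a, img_b)
    s_b, e_b = decompose(img_b, img_a)

    # Structure layers: keep, per pixel, the layer with larger local gradient energy.
    mask_s = local_gradient_energy(s_a) >= local_gradient_energy(s_b)
    fused_structure = np.where(mask_s, s_a, s_b)

    # Energy layers: abs-max (l1-max) selection of the base intensities.
    mask_e = np.abs(e_a) >= np.abs(e_b)
    fused_energy = np.where(mask_e, e_a, e_b)

    return np.clip(fused_structure + fused_energy, 0, 255).astype(np.uint8)
```

Because both fusion rules are local, window-based selections and the decomposition is a single noniterative filtering pass, a pipeline of this shape runs in time linear in the number of pixels, which is consistent with the near-real-time efficiency claimed above.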