Abstract

Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In this paper, we propose a novel multi-modal medical image fusion method based on a simplified pulse-coupled neural network (PCNN) and the quaternion wavelet transform (QWT). The proposed fusion algorithm can combine not only pairs of computed tomography (CT) and magnetic resonance (MR) images, but also pairs of CT and proton-density-weighted MR images, as well as multi-spectral MR images such as T1 and T2. Experiments on six pairs of multi-modal medical images are conducted to compare the proposed scheme with four existing methods. The performance of each method is assessed using mutual information metrics and a comprehensive fusion performance characterization (total fusion performance, fusion loss, and modified fusion artifacts criteria). The experimental results show that the proposed algorithm not only extracts more of the important visual information from the source images, but also effectively avoids introducing artificial information into the fused medical images. It significantly outperforms existing medical image fusion methods in terms of both subjective performance and objective evaluation metrics.

Highlights

  • Various medical image modalities are available, including magnetic resonance imaging (MRI), computed tomography (CT), ultrasonography, magnetic resonance angiography (MRA), positron emission tomography (PET), single-photon emission CT (SPECT), and functional MRI [1]

  • We propose a novel multi-modal medical image fusion method based on a simplified pulse-coupled neural network and the quaternion wavelet transform

  • Mutual information (MI), proposed by Piella [24], indicates how much information the fused image conveys about the reference image
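As a rough illustration of the mutual information metric mentioned above, the sketch below estimates MI between two grayscale images from their joint histogram. This is a generic histogram-based estimator written for illustration; the bin count and the helper name `mutual_information` are assumptions, not the paper's exact implementation.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Estimate mutual information (in bits) between two grayscale images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# For fusion quality, MI(fused, source_A) + MI(fused, source_B) is a common
# aggregate: higher values mean the fused image retains more source information.
```

Note that MI of an image with itself reduces to its entropy, so it upper-bounds the MI against any other image of the same statistics.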


Summary

Introduction

Various medical image modalities are available, including magnetic resonance imaging (MRI), computed tomography (CT), ultrasonography, magnetic resonance angiography (MRA), positron emission tomography (PET), single-photon emission CT (SPECT), and functional MRI (fMRI) [1]. The quaternion wavelet transform (QWT) [10], proposed by Corrochano, is a recent multiscale analysis tool for capturing the geometric features of an image. Fusion rules based on principal component analysis lead to pixel distortion in fused multi-modal medical images [6]. In [12], a visibility feature method was proposed to fuse the quaternion wavelet coefficients of source medical images. In [14], the weighted sum-modified Laplacian and maximum local energy were used to select second-generation contourlet transform coefficients. Although these fusion rules produce high-quality images, they cause loss of information and pixel distortion due to their nonlinear operations. After the multiscale decomposition (MSD) by QWT, the fusion rule based on the simplified PCNN is applied to the high-frequency subbands. The experimental results demonstrate that the proposed fusion rule is more effective than these methods.
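The high-frequency fusion step described above can be sketched as follows: each subband coefficient magnitude feeds one PCNN neuron, firing counts are accumulated over several iterations, and the coefficient whose neuron fired more often is kept. This is a generic simplified-PCNN sketch, not the paper's exact parameterization; the linking strength `beta`, threshold decay `alpha_theta`, and threshold magnitude `V_theta` are assumed values.

```python
import numpy as np

def pcnn_firing_map(S, iterations=100, beta=0.2, alpha_theta=0.2, V_theta=20.0):
    """Simplified PCNN: accumulate firing counts for a stimulus map S
    (e.g. absolute values of high-frequency subband coefficients)."""
    S = np.abs(S).astype(float)
    Y = np.zeros_like(S)            # pulse output of each neuron
    theta = np.ones_like(S)         # dynamic threshold
    fire_count = np.zeros_like(S)
    # 8-neighbour linking field, implemented with circular shifts
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    for _ in range(iterations):
        L = sum(np.roll(np.roll(Y, di, axis=0), dj, axis=1) for di, dj in shifts)
        U = S * (1.0 + beta * L)                  # internal activity (feeding x linking)
        Y = (U > theta).astype(float)             # neuron fires when activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + V_theta * Y  # exponential decay + refractory jump
        fire_count += Y
    return fire_count

# Coefficient selection sketch for two subbands cA, cB:
#   fused = np.where(pcnn_firing_map(cA) >= pcnn_firing_map(cB), cA, cB)
```

Stronger stimuli cross the decaying threshold sooner and more often, so their firing counts are higher; this is what lets the firing map act as a saliency measure for coefficient selection.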

Quaternion Wavelet Transform
Concepts of Quaternion Algebra
Quaternion Wavelet Transform
Pulse-Coupled Neural Network
General Medical Image Fusion Framework
Low-Frequency Subband Fusion Rule
High-Frequency Subband Fusion Rule
Mutual Information
Outline of Proposed Algorithm
Experimental Setup
Subjective Evaluation Analysis
Objective Evaluation Analysis
Conclusion