Abstract

Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain tissue anatomy but contain no functional information. Effectively combining images of the two modalities has therefore become a research challenge. In this paper, a new framework for medical image fusion is proposed that combines convolutional neural networks (CNNs) and the non-subsampled shearlet transform (NSST) to exploit the advantages of both. The method effectively retains the functional information of the CT image and reduces the loss of brain structure information and spatial distortion in the MRI image. In our fusion framework, an initial weight map that integrates the pixel activity information of the two source images is generated by a dual-branch convolutional network and then decomposed by NSST. First, NSST is applied to the source images and the initial weight map to obtain their low-frequency and high-frequency coefficients. Then, the first component of the low-frequency coefficients is fused by a novel strategy that simultaneously addresses two key issues in the fusion process, namely energy preservation and detail extraction; the second component of the low-frequency coefficients is fused by a strategy designed around the spatial frequency of the weight map. The high-frequency coefficients are fused under the guidance of the high-frequency components of the initial weight map. Finally, the fused image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that our method performs well for medical image fusion.
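
The pipeline described above can be summarized in a short sketch. This is a minimal Python illustration under stated assumptions, not the authors' implementation: nsst_decompose and nsst_inverse are hypothetical placeholders for an NSST library, the weighted average and max-absolute-value rules are simplified stand-ins for the paper's two-component low-frequency strategy and weight-guided high-frequency rule, and spatial_frequency is the standard measure that the second low-frequency rule relies on.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF**2 + CF**2), where RF and CF are the
    root-mean-square horizontal and vertical first differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def fuse_images(ct, mri, weight, nsst_decompose, nsst_inverse):
    """Sketch of the pipeline in the abstract: decompose, fuse per band,
    reconstruct. `nsst_decompose(img) -> (low, highs)` and
    `nsst_inverse(low, highs) -> img` are placeholders for an NSST
    implementation; `weight` is the initial weight map produced by the
    dual-branch CNN, with values in [0, 1]."""
    low_a, highs_a = nsst_decompose(ct)
    low_b, highs_b = nsst_decompose(mri)
    low_w, _ = nsst_decompose(weight)

    # Low-frequency fusion: a single weighted average stands in for the
    # paper's two-component strategy (energy preservation plus detail
    # extraction for one component, a rule driven by the spatial frequency
    # of the weight map -- see spatial_frequency above -- for the other).
    low_fused = low_w * low_a + (1.0 - low_w) * low_b

    # High-frequency fusion: the paper guides this step with the
    # high-frequency components of the initial weight map; a simple
    # max-absolute-value selection is used here as a stand-in.
    highs_fused = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]

    return nsst_inverse(low_fused, highs_fused)
```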

Highlights

  • In recent decades, image fusion has played an essential role in the field of image processing [1].

  • The results of the non-subsampled contourlet transform (NSCT)-pulse-coupled neural network (PCNN), non-subsampled shearlet transform (NSST)-SR, and NSST-PAPCNN methods shown in Figures 7(g)-7(i) preserve more bone structure from the computed tomography (CT) image but, compared with our method, miss soft tissue from the magnetic resonance imaging (MRI) image.

  • A convolutional neural network (CNN) is trained to generate the initial weight map from the source images.



Introduction

Image fusion has played an essential role in the field of image processing [1]. Multiscale transform tools include the Laplacian pyramid (LAP) [5], the ratio of low-pass pyramid (RP) [6], the dual-tree complex wavelet transform (DTCWT) [7], the contourlet transform (CT) [8], and the non-subsampled contourlet transform (NSCT) [9]. These fusion methods all consist of three steps: decomposition, fusion, and reconstruction. The proposed method introduces a CNN to encode a direct mapping from the source images to a weight map, which forms the fusion framework for the low-frequency coefficients. CNNs capture the nonlinear features of images, whereas traditional pixel-level methods fail to obtain high-level features, and redundant information can be effectively filtered through the convolution and pooling layers [16].
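
To make the weight-map generation concrete, the following is a minimal sketch of a dual-branch CNN that maps a pair of registered source images to a per-pixel weight map. The layer sizes, channel counts, shared-weight branches, and sigmoid output are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualBranchWeightNet(nn.Module):
    """Illustrative dual-branch CNN: each source image passes through a
    shared-weight feature extractor, and a fusion head maps the
    concatenated features to a weight map in [0, 1]."""

    def __init__(self):
        super().__init__()
        # Shared-weight feature extractor applied to each source image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion head: concatenated branch features -> per-pixel weight.
        self.head = nn.Sequential(
            nn.Conv2d(64, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, ct, mri):
        fa = self.features(ct)   # branch 1: CT pixel-activity features
        fb = self.features(mri)  # branch 2: MRI pixel-activity features
        return self.head(torch.cat([fa, fb], dim=1))

# Usage: a 256x256 image pair yields a same-size weight map.
net = DualBranchWeightNet()
w = net(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
assert w.shape == (1, 1, 256, 256)
```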

Theoretical Basis
Fusion Strategies
Experiments
Methods
Conclusions