Abstract

A new robust adaptive fusion method for dual-modality PET/CT medical images is proposed within the Piella framework. The algorithm consists of the following three steps. First, the registered PET and CT images are decomposed using the nonsubsampled contourlet transform (NSCT). Second, to highlight lesions in the low-frequency components, the low-frequency components are fused by a pulse-coupled neural network (PCNN), which has a higher sensitivity to feature areas with low intensity. For the high-frequency subbands, a Gaussian random matrix is used to obtain compressed measurements, the histogram distance between every pair of corresponding sub-blocks of high-frequency coefficients is employed as the match measure, and regional energy is used as the activity measure. The fusion factor d is then calculated from the match measure and the activity measure, the high-frequency measurements are fused according to the fusion factor, and the fused high-frequency subbands are reconstructed from the fused measurements using the orthogonal matching pursuit (OMP) algorithm. Third, the final image is obtained through the inverse NSCT of the fused low-frequency components and the reconstructed high-frequency subbands. To validate the proposed algorithm, four comparative experiments were performed: a comparison with other image fusion algorithms, a comparison of different activity measures, a comparison of different match measures, and PET/CT fusion of 20 groups of lung cancer images. The experimental results showed that the proposed algorithm better retains and displays lesion information and is superior to the other fusion algorithms in both subjective and objective evaluations.
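The sketch below illustrates the high-frequency fusion step only, assuming two registered, same-sized NSCT high-frequency subbands are already available (the NSCT decomposition and the PCNN-based low-frequency fusion are not shown). The helper names `histogram_match`, `regional_energy`, and `fuse_high_subbands` are hypothetical, and the block size, sampling rate, histogram bin count, sparsity level, threshold alpha, and the exact form of the fusion factor d are illustrative assumptions rather than values from the paper; the fusion factor here follows a Burt and Kolczynski style rule commonly used with the Piella framework.

```python
# Minimal sketch of the high-frequency fusion step in the measurement domain.
# Assumptions (not from the paper): 8x8 blocks, 50% sampling rate, alpha = 0.7,
# sparsity level n/8, and a Burt/Kolczynski-style fusion factor.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def histogram_match(block_a, block_b, bins=32):
    """Match measure: 1 minus the normalized L1 histogram distance
    between two corresponding sub-blocks (1 = identical histograms)."""
    lo = float(min(block_a.min(), block_b.min()))
    hi = float(max(block_a.max(), block_b.max()))
    if hi <= lo:
        return 1.0
    ha, _ = np.histogram(block_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(block_b, bins=bins, range=(lo, hi))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 1.0 - 0.5 * float(np.abs(ha - hb).sum())


def regional_energy(block):
    """Activity measure: regional energy of a sub-block."""
    return float(np.sum(block.astype(np.float64) ** 2))


def fuse_high_subbands(sub_a, sub_b, block=8, rate=0.5, alpha=0.7, seed=0):
    """Fuse two high-frequency subbands in the compressed-measurement domain
    and reconstruct the fused subband block by block with OMP."""
    rng = np.random.default_rng(seed)
    n = block * block
    m = max(1, int(rate * n))
    phi = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian measurement matrix
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=max(1, n // 8),
                                    fit_intercept=False)
    fused = np.zeros_like(sub_a, dtype=np.float64)
    rows, cols = sub_a.shape
    for i in range(0, rows - block + 1, block):
        for j in range(0, cols - block + 1, block):
            ba = sub_a[i:i + block, j:j + block].astype(np.float64)
            bb = sub_b[i:i + block, j:j + block].astype(np.float64)
            ya, yb = phi @ ba.ravel(), phi @ bb.ravel()  # compressed measurements
            match = histogram_match(ba, bb)              # match measure
            ea, eb = regional_energy(ba), regional_energy(bb)  # activity measure
            if match <= alpha:                           # weak match: select one source
                d = 1.0 if ea >= eb else 0.0
            else:                                        # strong match: weighted average
                w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - alpha)
                d = 1.0 - w_min if ea >= eb else w_min
            y_fused = d * ya + (1.0 - d) * yb            # fuse the measurements
            omp.fit(phi, y_fused)                        # OMP reconstruction
            fused[i:i + block, j:j + block] = omp.coef_.reshape(block, block)
    return fused
```

In a full pipeline, a function like this would be applied to each directional high-frequency subband produced by the NSCT, and the fused subbands would then be combined with the PCNN-fused low-frequency component through the inverse NSCT.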

Highlights

  • The main purpose of medical image fusion is to generate a composite image by integrating the complementary information from multiple medical source images of the same scene [1]

  • We propose a self-adaptive PET/CT fusion algorithm based on compressed sensing and histogram distance, designed within the Piella framework

  • To verify the superiority of the proposed algorithm, it was compared with other fusion methods, including the traditional pixel-level fusion methods (the maximum, minimum, and weighted-average methods) and compressed-sensing-based fusion methods: compressed sensing image fusion based on the wavelet transform (W-CS) and on the contourlet transform (CT-CS)

Introduction

The main purpose of medical image fusion is to generate a composite image by integrating the complementary information from multiple medical source images of the same scene [1]. PET/CT fusion integrates molecular and anatomical images; the fused image contains pathophysiological information from the different modalities and improves the identifiability of lesion areas. It supports the differential diagnosis of benign and malignant lesions, improves the detection rate of local space-occupying lesions, and enables whole-body imaging in tumor exploration. The general framework of multiresolution image fusion was first proposed by Zhang and Blum [4]. Another line of research is the design of fusion rules based on the Piella framework, the purpose of which is to explore how to construct the match measure and the activity measure by improving and optimizing traditional fusion rules [14, 15]
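As one concrete illustration of such a rule (the classical Burt and Kolczynski combination that the Piella framework generalizes; the paper's own construction of the fusion factor may differ), given an activity measure $a_X$ for each source $X$, a match measure $m_{AB}$, and a threshold $\alpha$, the weights can be set as

$$
w_{\min} =
\begin{cases}
0, & m_{AB} \le \alpha,\\[4pt]
\dfrac{1}{2} - \dfrac{1}{2}\,\dfrac{1 - m_{AB}}{1 - \alpha}, & m_{AB} > \alpha,
\end{cases}
\qquad
w_{\max} = 1 - w_{\min},
$$

so that the fused coefficient is $y_F = w_A\,y_A + w_B\,y_B$, where the source with the larger activity measure receives $w_{\max}$ and the other receives $w_{\min}$.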
