Abstract

We propose to integrate multi-modality images and a self-attention strategy into a cycle-consistent adversarial network (CycleGAN) to predict attenuation-corrected (AC) positron emission tomography (PET) images from non-attenuation-corrected (NAC) PET and MRI. During the training stage, deep features are extracted in a 3D patch-based fashion from the NAC PET and MRI images, and the most informative features are automatically highlighted by the self-attention strategy. The deep features are then mapped to the AC PET image by a 3D CycleGAN. During the correction stage, paired patches are extracted from a new patient's NAC PET and MRI images and fed into the trained networks to obtain the AC PET image. The proposed algorithm was evaluated on datasets from 18 patients using six-fold cross-validation. The AC PET images generated by the proposed method closely resemble the reference AC PET images, and profile comparisons show excellent agreement between the reference and the predicted images. The proposed method yields a mean error (ME) ranging from -1.61% to 3.67% across all contoured volumes of interest, and the whole-brain ME is less than 0.10%. These experimental results demonstrate the clinical feasibility and accuracy of the proposed method.
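To make the two-stage workflow concrete, the sketch below shows, in PyTorch, how a 3D self-attention block and a two-channel (NAC PET + MRI) patch input could be wired into a single generator. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, channel counts, 32×32×32 patch size, downsampling before attention, and SAGAN-style attention formulation are all assumptions, and the full CycleGAN (second generator, discriminators, cycle-consistency losses) is omitted.

```python
# Illustrative sketch only; hyperparameters and architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention3D(nn.Module):
    """SAGAN-style self-attention applied to 3D feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned attention weight

    def forward(self, x):
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                       # (b, c//8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)            # (b, n, n)
        v = self.value(x).view(b, c, n)                      # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x                          # residual connection


class Generator3D(nn.Module):
    """Tiny encoder-attention-decoder; a stand-in for one CycleGAN generator."""

    def __init__(self, in_channels: int = 2, out_channels: int = 1, base: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(in_channels, base, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm3d(base),
            nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm3d(base * 2),
            nn.ReLU(inplace=True),
        )
        # Attention acts at the downsampled resolution so the n x n attention
        # map (n = D*H*W) stays tractable for 3D patches.
        self.attention = SelfAttention3D(base * 2)
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm3d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base, out_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, nac_pet, mri):
        # Multi-modality input: NAC PET and MRI patches stacked as two channels.
        x = torch.cat([nac_pet, mri], dim=1)
        return self.decode(self.attention(self.encode(x)))


if __name__ == "__main__":
    # Paired 32^3 patches from NAC PET and MRI (patch size is an assumption).
    nac_patch = torch.randn(1, 1, 32, 32, 32)
    mri_patch = torch.randn(1, 1, 32, 32, 32)
    ac_pred = Generator3D()(nac_patch, mri_patch)
    print(ac_pred.shape)  # torch.Size([1, 1, 32, 32, 32])
```

At correction time, the same idea applies: paired NAC PET/MRI patches from a new patient would be passed through the trained generator and the predicted patches reassembled into the full AC PET volume.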
