Abstract

The radiation dose a patient receives during dual-energy computed tomography (CT) is a significant concern in the medical community, and balancing the tradeoff between the radiation level used and the quality of the resulting CT images is challenging. This paper proposes a neural-network method for synthesizing high-energy CT (HECT) images from low-energy CT (LECT) images, offering an alternative to HECT scanning that requires only an LECT scan and thereby greatly reduces the radiation dose a patient receives. In the training phase, the proposed structure cyclically generates HECT and LECT images to improve the accuracy of extracted edge and texture features. Specifically, we combine multiple connection methods with channel attention (CA) and pixel attention (PA) mechanisms to strengthen the network's ability to map image features. In the prediction phase, we use a model consisting only of the network component that synthesizes HECT images from LECT images. The proposed method was evaluated on clinical hip CT image data sets from Guizhou Provincial People's Hospital. Compared with other available methods [a generative adversarial network (GAN), a residual encoder-to-decoder network with a visual geometry group (VGG) pretrained model (RED-VGG), a Wasserstein GAN (WGAN), and CycleGAN] in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), normalized mean square error (NMSE), and a visual evaluation, the proposed method performed better on each of these criteria. Relative to CycleGAN, it improved the PSNR by 2.44%, improved the SSIM by 1.71%, and reduced the NMSE by 15.2%; the differences in these statistical indicators are statistically significant, demonstrating the strength of the proposed method. By synthesizing high-energy CT images from low-energy CT images, the method significantly reduces both the cost of treatment and the radiation dose received by patients, and its results are superior to those of the other methods in both image quality metrics and visual comparison.
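
To make the attention mechanisms mentioned above concrete, the following is a minimal PyTorch sketch of one common way to combine channel attention (CA) and pixel attention (PA) inside a residual convolutional block. The layer widths, reduction ratio, and ordering of the CA and PA modules are illustrative assumptions for this sketch, not the authors' published architecture.

```python
# Illustrative sketch only: a residual block combining channel attention (CA)
# and pixel attention (PA). All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global spatial average
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))              # per-channel reweighting

class PixelAttention(nn.Module):
    """Attention map applied at every spatial location."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.conv(x)                       # per-pixel reweighting

class CAPABlock(nn.Module):
    """Convolutional block with CA, PA, and a residual (skip) connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)

    def forward(self, x):
        out = self.body(x)
        out = self.pa(self.ca(out))                   # channel then pixel attention
        return x + out                                # residual connection

# Usage example: one block applied to a batch of feature maps.
features = torch.randn(2, 64, 128, 128)
block = CAPABlock(64)
print(block(features).shape)                          # torch.Size([2, 64, 128, 128])
```

Blocks of this kind would sit inside the LECT-to-HECT generator; during training, a second generator mapping HECT back to LECT enables the cyclic generation described above, and only the LECT-to-HECT generator is retained for prediction.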
