Abstract
This work presents a learning-based method to synthesize dual-energy CT (DECT) images from conventional single-energy CT (SECT). The proposed method uses a residual attention generative adversarial network, in which residual blocks with attention gates force the model to focus on the differences between DECT maps and SECT images. To evaluate the accuracy of the method, we retrospectively investigated 20 head-and-neck cancer patients for whom both DECT and SECT scans were available. The high- and low-energy CT images acquired from DECT served as learning targets for the SECT datasets during training, and the method was evaluated using a leave-one-out cross-validation strategy. The synthesized DECT images showed an average mean absolute error of approximately 30 Hounsfield units (HU) across the whole-body volume. These results indicate the high accuracy of the DECT images synthesized by our machine-learning-based method.