Dual-energy computed tomography (DECT) is highly promising for material characterization and identification, but the reconstructed material-specific images suffer from magnified noise and beam-hardening artifacts. Although various DECT material decomposition methods have been proposed to address this problem, the quality of the decomposed images remains unsatisfactory, particularly at image edges. In this study, a data-driven approach using dual interactive Wasserstein generative adversarial networks (DIWGAN) is developed to improve DECT decomposition accuracy and preserve image edges. In the proposed DIWGAN, two interactive generators synthesize decomposed images of two basis materials by modeling the spatial and spectral correlations in the input DECT reconstructed images, and the corresponding discriminators are employed to distinguish the generated images from the labels. The DECT images reconstructed from the high- and low-energy bins are sent to the two generators separately, and each generator synthesizes one material-specific image, thereby ensuring the specificity of the network modeling. In addition, information from the different energy bins is exploited through feature sharing between the two generators. During decomposition model training, a hybrid loss function comprising L1, edge, and adversarial terms is incorporated to preserve the texture and edges in the generated images. Additionally, a selector determines which generator is trained in each iteration, which balances the modeling ability of the two generators and improves the material decomposition accuracy.

The performance of the proposed method is evaluated on a digital phantom, an XCAT phantom, and real data from a mouse. On the digital phantom, the bone and soft-tissue regions are accurately separated by the trained decomposition model; the material densities in the different bone and soft-tissue regions are close to the ground truth, with density errors below 3 mg/ml. The XCAT phantom results show that the material-specific images generated by direct matrix inversion and iterative decomposition contain severe noise and artifacts. Among the learning-based methods, the decomposed images from the fully convolutional network (FCN) and butterfly network (Butterfly-Net) still contain varying degrees of artifacts, whereas the proposed DIWGAN yields high-quality images. Compared with Butterfly-Net, the root-mean-square error (RMSE) of the soft-tissue images generated by DIWGAN decreased by 0.01 g/ml, while the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the soft-tissue images reached 31.43 dB and 0.9987, respectively. The mass densities of the decomposed materials are closest to the ground truth with DIWGAN, and the noise standard deviation of the decomposed images is reduced by 69%, 60%, 33%, and 21% relative to direct matrix inversion, iterative decomposition, FCN, and Butterfly-Net, respectively. The results on the mouse data further indicate the potential of the proposed method for real scanned data.

In summary, a deep-learning-based DECT material decomposition method is proposed in which the mapping from reconstructed images to material-specific images is learned by training the DIWGAN model. Results on both simulated phantoms and real data demonstrate the advantages of this method in suppressing noise and beam-hardening artifacts.
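As a concrete illustration of the training objective summarized above, the sketch below shows one plausible PyTorch formulation of the hybrid generator loss (L1 + edge + Wasserstein adversarial terms) together with a simple per-iteration generator selector. The Sobel edge operator, the loss weights `lambda_l1`, `lambda_edge`, and `lambda_adv`, and the largest-loss selection rule are all illustrative assumptions; the abstract does not specify the paper's exact edge operator, weights, or selection criterion.

```python
# Minimal sketch of a hybrid L1 + edge + WGAN generator loss and a
# per-iteration generator selector. All weights and operators here are
# assumptions for illustration, not the paper's published settings.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Edge magnitude via Sobel filters (one common choice of edge operator;
    the paper's exact operator is an assumption here). img: (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                     # y-gradient kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def hybrid_generator_loss(fake, label, critic_score,
                          lambda_l1=1.0, lambda_edge=1.0, lambda_adv=0.01):
    """L1 + edge + Wasserstein adversarial loss for one generator.

    fake, label:   (N, 1, H, W) generated and ground-truth material images
    critic_score:  discriminator (critic) output on the generated image
    """
    l1 = F.l1_loss(fake, label)                              # pixel fidelity
    edge = F.l1_loss(sobel_edges(fake), sobel_edges(label))  # edge preservation
    adv = -critic_score.mean()                               # WGAN generator term
    return lambda_l1 * l1 + lambda_edge * edge + lambda_adv * adv

def select_generator(loss_bone, loss_soft):
    """Hypothetical selector: update the generator with the larger current
    loss so both branches keep pace (the actual rule is not given in the
    abstract)."""
    return "bone" if loss_bone.item() >= loss_soft.item() else "soft"
```

In a full training loop, each iteration would first update the two discriminators as WGAN critics and then apply this hybrid loss only to the generator chosen by the selector, with gradients from the shared features still reaching both generator branches.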