CT image reconstruction from sparse-view projections is an important imaging configuration for low-dose CT, as it can reduce radiation dose. However, CT images reconstructed from sparse-view projections by traditional analytic algorithms suffer from severe sparse artifacts. It is therefore of great value to develop advanced methods to suppress these artifacts. In this work, we aim to suppress sparse artifacts with a deep learning (DL)-based method. Inspired by the strong performance of the DenseNet and Transformer architectures in computer vision tasks, we propose a Dense U-shaped Transformer (D-U-Transformer) for sparse-artifact suppression. This architecture combines the strengths of densely connected convolutions in capturing local context and of the Transformer in modelling long-range dependencies, and applies channel attention for feature fusion. Moreover, we design a dual-domain multi-loss function with learned weights to optimize the model and further improve image quality. Experimental results show that the proposed D-U-Transformer outperforms several representative DL-based models on the well-known Mayo Clinic LDCT dataset in terms of artifact suppression and image feature preservation. Extensive ablation experiments demonstrate the effectiveness of each component of the proposed model for sparse-view computed tomography (SVCT) reconstruction. The proposed method can effectively suppress sparse artifacts and achieve high-precision SVCT reconstruction, thereby promoting clinical CT scanning toward low radiation dose and high image quality. The findings of this work can also be applied to denoising and artifact-removal tasks in CT and other medical images.
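As a rough illustration of the learned-weight dual-domain loss mentioned above, the following is a minimal sketch assuming an uncertainty-style weighting (in the spirit of Kendall et al.) over an image-domain and a projection (sinogram)-domain L1 term. The class name, the choice of L1 penalties, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DualDomainLoss(nn.Module):
    """Hypothetical dual-domain multi-loss with learnable weights.

    Combines an image-domain term and a projection-domain term via
    learned log-variance weights (uncertainty weighting); the paper's
    exact loss terms and weighting rule may differ.
    """
    def __init__(self):
        super().__init__()
        # Log-variance parameters, optimized jointly with the network.
        self.log_var_img = nn.Parameter(torch.zeros(1))
        self.log_var_proj = nn.Parameter(torch.zeros(1))
        self.l1 = nn.L1Loss()

    def forward(self, pred_img, gt_img, pred_proj, gt_proj):
        loss_img = self.l1(pred_img, gt_img)
        loss_proj = self.l1(pred_proj, gt_proj)
        # Each term is scaled by a learned precision exp(-log_var);
        # the additive log_var terms keep the weights from collapsing.
        return (torch.exp(-self.log_var_img) * loss_img + self.log_var_img
                + torch.exp(-self.log_var_proj) * loss_proj + self.log_var_proj)
```

Because the weights are module parameters, passing `DualDomainLoss().parameters()` to the same optimizer as the network lets the relative importance of the two domains be learned during training rather than hand-tuned.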