Objective. Among various deep-network-based sparse-view CT image reconstruction studies, sinogram upscaling networks have been predominantly employed to synthesize additional view information. However, the performance of sinogram-based networks is limited in removing aliasing streak artifacts and recovering low-contrast small structures. In this study, we used a view-by-view back-projection (VVBP) tensor-domain network to overcome these limitations of sinogram-based approaches.

Approach. The proposed method offers the advantage of addressing aliasing artifacts directly in the 3D tensor domain rather than in the 2D sinogram domain. In the tensor-domain network, multi-planar anti-aliasing modules remove artifacts within the coronal and sagittal tensor planes. In addition, a data-fidelity-based refinement module successively processes the output images of the tensor network to recover image sharpness and texture.

Main results. The proposed method outperformed other state-of-the-art sinogram-based networks in removing aliasing artifacts and recovering low-contrast details. The performance was validated on both numerical and clinical projection data in a circular fan-beam CT configuration.

Significance. We observed that view-by-view aliasing artifacts in sparse-view CT exhibit distinct patterns within the tensor planes, making them effectively removable in high-dimensional representations. We also demonstrated that the co-domain characteristics of tensor-space processing offer higher generalization performance for aliasing artifact removal than conventional sinogram-domain processing.
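To make the VVBP tensor construction concrete, the sketch below is a minimal, hedged illustration (not the paper's implementation): each projection view is back-projected into its own 2D slice and the slices are stacked along a view axis, so that summing over that axis yields the usual unfiltered back-projection image. It assumes a simplified 2D parallel-beam geometry rather than the fan-beam configuration used in the study, and names such as `vvbp_tensor` are illustrative placeholders.

```python
# Minimal sketch (assumption, not the authors' code): build a view-by-view
# back-projection (VVBP) tensor for a 2D parallel-beam geometry with NumPy.
import numpy as np

def vvbp_tensor(sinogram, angles_deg, img_size):
    """sinogram: (n_views, n_detectors) array; returns (n_views, img_size, img_size)."""
    n_views, n_det = sinogram.shape
    # Image-plane pixel coordinates, centered at the origin.
    coords = np.arange(img_size) - (img_size - 1) / 2.0
    xx, yy = np.meshgrid(coords, coords)
    tensor = np.zeros((n_views, img_size, img_size), dtype=np.float32)
    for v, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this view (parallel-beam ray sum).
        t = xx * np.cos(theta) + yy * np.sin(theta) + (n_det - 1) / 2.0
        # Linearly interpolate the 1D projection at each pixel's detector position.
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = t - t0
        tensor[v] = (1.0 - w) * sinogram[v, t0] + w * sinogram[v, t0 + 1]
    return tensor  # tensor.sum(axis=0) ~ unfiltered back-projection image

# Example usage: a 60-view sparse acquisition reconstructed on a 256x256 grid.
# sino = np.random.rand(60, 363).astype(np.float32)
# t = vvbp_tensor(sino, np.linspace(0, 180, 60, endpoint=False), 256)
```

In this 3D representation, slicing the stacked tensor along the coronal and sagittal planes exposes the view-by-view structure that the anti-aliasing modules described above are designed to exploit.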