To propose a dual-domain CBCT reconstruction framework (DualSFR-Net) based on generative projection interpolation that reduces artifacts in sparse-view cone-beam computed tomography (CBCT) reconstruction. The proposed DualSFR-Net consists of a generative projection interpolation module, a domain transformation module, and an image restoration module. The generative projection interpolation module comprises two networks: a sparse projection interpolation network (SPINet), built on a generative adversarial network, which synthesizes full-view projection data from the sparse-view projection data, and a full-view projection restoration network (FPRNet), which further restores the synthesized full-view projections. The domain transformation module introduces FDK reconstruction and forward projection operators as differentiable components, supporting both the forward pass and gradient backpropagation between the projection and image domains. The image restoration module contains an image restoration network (FIRNet) that fine-tunes the domain-transformed images to eliminate residual artifacts and noise. Validation experiments conducted on a dental CT dataset demonstrated that DualSFR-Net reconstructs high-quality CBCT images under sparse-view sampling protocols. Quantitatively, compared with the best existing methods, DualSFR-Net improved PSNR by 0.6615 and 0.7658 and increased SSIM by 0.0053 and 0.0134 under 2-fold and 4-fold sparse-view protocols, respectively. The proposed generative projection interpolation-based dual-domain sparse-view CBCT reconstruction method effectively reduces streak artifacts, improving image quality, and enables efficient joint training of the dual-domain imaging networks for sparse-view CBCT reconstruction.
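The dual-domain idea above can be illustrated with a minimal numpy sketch. Everything here is a stand-in assumption, not the paper's method: a random matrix replaces the cone-beam system model, simple linear interpolation between angular neighbors replaces SPINet, and an adjoint backprojection replaces FDK reconstruction; the sizes and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and operators -- illustrative only, not the paper's geometry.
n_pix, n_views, n_det = 16, 8, 12
A = rng.random((n_views * n_det, n_pix))   # random stand-in for the CBCT system model
x_true = rng.random(n_pix)                 # flattened "ground-truth" volume
full_sino = (A @ x_true).reshape(n_views, n_det)

# 2-fold sparse-view sampling: keep every other projection view.
sparse_views = full_sino[::2]

# Stand-in for SPINet: synthesize the missing views by linear interpolation
# between angular neighbors (the paper uses a GAN-based network instead).
synth_sino = np.empty_like(full_sino)
synth_sino[::2] = sparse_views
synth_sino[1::2] = 0.5 * (sparse_views + np.roll(sparse_views, -1, axis=0))

# Stand-in for the FDK domain transform: the adjoint (backprojection) A^T y.
# In DualSFR-Net both FDK and forward projection admit gradient
# backpropagation, so the projection- and image-domain networks train jointly.
recon = A.T @ synth_sino.ravel()

# Interpolated views sit far closer to the truth than leaving them empty.
zero_fill = np.zeros_like(full_sino)
zero_fill[::2] = sparse_views
err_interp = np.linalg.norm(synth_sino - full_sino)
err_zero = np.linalg.norm(zero_fill - full_sino)
```

In the actual framework the interpolated sinogram would next pass through FPRNet before the domain transform, and the reconstructed image through FIRNet; this sketch only shows why filling the missing views before reconstruction suppresses the streaks that sparse sampling would otherwise produce.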