Abstract

Industrial computed tomography (CT) images reconstructed directly from projection data with the filtered back projection (FBP) method exhibit strong metal artifacts caused by beam hardening, scatter, statistical noise, and limitations of the reconstruction algorithm. Traditional correction approaches, confined to either the projection domain or the image domain, fail to fully exploit the information embedded in the data. To leverage both domains, we propose a joint deep learning framework that integrates UNet and ResNet architectures to correct metal artifacts in CT images. First, a UNet corrects the imperfect projection data (sinograms); its output serves as the input to a CT image reconstruction unit. The reconstructed CT images are then fed into a ResNet, and the two networks are trained jointly to optimize image quality. The dataset consists of projection data generated by analytical simulation. The corrected industrial CT images show a significant reduction in metal artifacts, with an average peak signal-to-noise ratio (PSNR) of 36.13 dB and an average structural similarity index (SSIM) of 0.953. By correcting in the projection and image domains simultaneously, our method exploits the complementary information in both and markedly outperforms deep learning-based single-domain corrections. The generalization capability of the proposed method is further verified in ablation experiments and in CT artifact correction for multi-material phantoms.
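The reconstruction unit that sits between the two networks is standard FBP. As a rough illustration of that step only (not the authors' implementation), the following is a minimal NumPy sketch of parallel-beam FBP with an ideal ramp filter, checked against the analytic sinogram of a disk phantom; the geometry, filter, and phantom are assumptions made for illustration.

```python
import numpy as np

def fbp(sino, angles):
    """Reconstruct an image from a parallel-beam sinogram (illustrative sketch).

    sino   : (n_angles, n_det) array of line integrals
    angles : projection angles in radians
    """
    n = sino.shape[1]
    # Apply an ideal ramp filter to each projection in the Fourier domain.
    freqs = np.fft.fftfreq(n)  # frequency in cycles per detector sample
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))
    # Backproject each filtered projection over the image grid.
    xs = np.arange(n) - n / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, angles):
        # Detector coordinate hit by each pixel at this projection angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + n / 2
        recon += np.interp(t.ravel(), np.arange(n), proj,
                           left=0.0, right=0.0).reshape(n, n)
    return recon * np.pi / len(angles)

# Analytic sinogram of a centered disk of radius 10 with unit attenuation:
# every projection is 2*sqrt(r^2 - s^2), independent of the angle.
n, r = 64, 10.0
s = np.arange(n) - n / 2
row = 2 * np.sqrt(np.clip(r**2 - s**2, 0.0, None))
angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.tile(row, (len(angles), 1))
recon = fbp(sino, angles)  # interior values approach the true attenuation of 1
```

In the paper's pipeline, the UNet output (the corrected sinogram) would take the place of `sino` here, and `recon` would be the input handed to the ResNet for image-domain refinement.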
