Abstract
Cone-beam computed tomography (CBCT) offers advantages over conventional fan-beam CT in that it requires a shorter acquisition time and a lower radiation dose to obtain images. However, CBCT images suffer from low soft-tissue contrast, noise, and artifacts compared to conventional fan-beam CT images. Therefore, it is essential to improve the image quality of CBCT. In this paper, we propose a synthetic approach that translates CBCT images with deep neural networks. Our method requires only unpaired and unaligned CBCT images and planning fan-beam CT (PlanCT) images for training. The CBCT images and PlanCT images may be obtained from different patients, as long as they are acquired with the same scanner settings. Once trained, three-dimensionally reconstructed CBCT images can be directly translated into high-quality PlanCT-like images. We demonstrate the effectiveness of our method on images from 20 prostate patients, providing both statistical and visual comparisons. The translated images show substantial improvement over the original CBCT images in voxel values, spatial uniformity, and artifact suppression, while the anatomical structures of the original CBCT images are well preserved. Our method produces visually PlanCT-like images from CBCT images while preserving anatomical structures.
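Training on unpaired, unaligned image sets of the kind described above is typically formulated with two generators and a cycle-consistency loss, as in CycleGAN-style unpaired image-to-image translation. The abstract does not specify the network architecture, so the sketch below is only an illustration of that loss under assumed stand-in generators `G` (CBCT to PlanCT) and `F` (PlanCT to CBCT); it is not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(x_cbct, y_planct, G, F):
    """L1 cycle loss: mean |F(G(x)) - x| + mean |G(F(y)) - y|.

    G and F are assumed generator functions (CBCT->PlanCT and
    PlanCT->CBCT); here they can be any array-to-array callables.
    """
    loss_x = np.mean(np.abs(F(G(x_cbct)) - x_cbct))
    loss_y = np.mean(np.abs(G(F(y_planct)) - y_planct))
    return loss_x + loss_y

# Toy stand-ins: identity generators yield zero cycle loss,
# which is the quantity a real training loop would minimize.
G = lambda x: x
F = lambda y: y
x = np.random.rand(8, 8)   # placeholder for a CBCT slice
y = np.random.rand(8, 8)   # placeholder for an unpaired PlanCT slice
print(cycle_consistency_loss(x, y, G, F))  # 0.0 for identity maps
```

Because the loss compares each image only with its own round-trip reconstruction, no paired or aligned CBCT/PlanCT examples are needed, which is what allows the training sets to come from different patients.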