Abstract

Cone-beam computed tomography (CBCT) is widely used in dental and maxillofacial imaging. However, CBCT suffers from shading artifacts caused by several factors, including photon scattering and data truncation. This paper presents a deep-learning-based method for eliminating the shading artifacts that interfere with diagnosis and treatment planning. The proposed method is a two-stage generative adversarial network (GAN)-based image-to-image translation that operates on unpaired CBCT and multidetector computed tomography (MDCT) images. The first stage uses a generic GAN together with a fidelity term that penalizes the difference between the original CBCT image and the MDCT-like image generated by the network. Although this approach is generally effective for denoising, it occasionally introduces additional artifacts that appear as bone-like structures in the output images, because the weak fidelity between the two imaging modalities makes it difficult to separate morphological structures from complex shading artifacts. The second stage of the proposed model addresses this problem: paired training data are collected from the first-stage results, inappropriate samples are excluded, and the fidelity-embedded GAN is retrained on the selected pairs. The results obtained in this study reveal that the proposed approach substantially reduces both the shading artifacts and the secondary artifacts arising from incorrect data fidelity while preserving the morphological structures of the original CBCT image. In addition, the image corrected by the proposed method supports more accurate bone segmentation than both the original CBCT image and the image corrected by the unpaired (first-stage) method.
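To make the two-stage procedure concrete, the sketch below illustrates the core training losses in PyTorch. It is a minimal illustration under assumed details (the generator G, discriminator D, L1 fidelity term, weight lambda_fid, and screening rule quality_ok are placeholders introduced here), not the authors' actual implementation.

    # Hedged sketch of the two-stage, fidelity-embedded GAN idea described above.
    # All names and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def stage1_generator_loss(G, D, cbct, lambda_fid=10.0):
        """Stage 1: unpaired, fidelity-embedded GAN loss.

        G maps a CBCT slice to an MDCT-like slice. Besides the adversarial
        term (fooling the MDCT discriminator D), a fidelity term keeps the
        output close to the input CBCT so that morphology is preserved.
        """
        fake_mdct = G(cbct)
        pred_fake = D(fake_mdct)
        adv = F.binary_cross_entropy_with_logits(
            pred_fake, torch.ones_like(pred_fake))   # output should look like MDCT
        fid = F.l1_loss(fake_mdct, cbct)              # output should stay close to input CBCT
        return adv + lambda_fid * fid

    def build_stage2_pairs(G, cbct_slices, quality_ok):
        """Stage 2: collect pseudo-paired data from stage-1 outputs.

        quality_ok(cbct, corrected) stands for a screening rule (assumed here)
        that discards outputs showing secondary, bone-like artifacts; the
        surviving (CBCT, corrected) pairs are kept for supervised retraining.
        """
        pairs = []
        with torch.no_grad():
            for x in cbct_slices:
                y = G(x)
                if quality_ok(x, y):
                    pairs.append((x, y))
        return pairs

    def stage2_paired_loss(G, cbct, target):
        """Supervised retraining of the generator on the selected pairs."""
        return F.l1_loss(G(cbct), target)

The screening step in build_stage2_pairs is where first-stage outputs deemed inappropriate (e.g., those with bone-like secondary artifacts) would be excluded before the network is retrained on the remaining pairs.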

Highlights

  • Dental cone-beam computed tomography (CBCT) is being increasingly used for diagnosis and treatment in implant, dental and maxillofacial surgery [2], [12]

  • This paper presents an unpaired CBCT-to-multidetector computed tomography (MDCT) translation method to alleviate shading artifacts caused by photon scattering and data truncation, which are among the main factors that degrade the quality of dental CBCT images

  • Although fidelity-embedded generative adversarial network (GAN) approaches for unpaired learning have demonstrated promising results for denoising CT images, their performance remains limited owing to the use of a naive fidelity term despite the significant differences between CBCT and MDCT


Summary

Introduction

Dental cone-beam computed tomography (CBCT) is being increasingly used for diagnosis and treatment in implant, dental and maxillofacial surgery [2], [12]. Most dental CBCT devices are designed to reduce the radiation dose by limiting the scan field of view (FOV) [35], which also truncates the projection data. Moreover, they frequently have no pre- and post-patient collimation, so a large amount of scattered radiation reaches the detector [8]. Both effects produce shading artifacts that are far less pronounced in MDCT images. This suggests the possibility of developing an artifact-correction function that maps CBCT images to MDCT-like images that contain negligible shading artifacts.
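In practice, such a correction function could be applied slice by slice to a reconstructed CBCT volume. The snippet below is a hypothetical usage sketch; the trained 2-D generator G, the tensor layout, and the intensity normalization are assumptions, since the text does not specify them.

    import torch

    def correct_volume(G, cbct_volume):
        """Apply a trained CBCT-to-MDCT-like generator to each axial slice.

        cbct_volume: tensor of shape (num_slices, H, W), normalized in the
        same way as the training data (assumed).
        """
        G.eval()
        corrected = []
        with torch.no_grad():
            for axial_slice in cbct_volume:
                x = axial_slice.unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
                corrected.append(G(x).squeeze(0).squeeze(0))
        return torch.stack(corrected)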


