Abstract

Developing a deep-learning network for denoising low-dose CT (LDCT) images requires paired computed tomography (CT) images acquired at different dose levels. However, it is challenging to obtain such images from the same patient. In this study, we introduce a novel approach to generating CT images at different dose levels. Our method estimates the quantum noise power spectrum (NPS) directly from patient CT images without requiring prior information. We model the anatomical NPS with a power-law function, estimate the quantum NPS by removing the anatomical NPS from the measured NPS, and then synthesize quantum noise by applying the estimated quantum NPS as a filter to random noise. Adding the synthesized noise to CT images produces images that appear as if they were acquired at a lower dose, yielding paired images at different dose levels for training denoising networks. The proposed method accurately estimates the reference quantum NPS. A denoising network trained with paired data generated using the synthesized quantum noise achieves performance comparable to networks trained on Mayo Clinic data, as measured by mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) scores. This approach offers a promising solution for developing LDCT denoising networks without the need for multiple scans of the same patient at different doses.
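To make the noise-synthesis step concrete, the sketch below shows one common way to turn an estimated 2-D quantum NPS into a correlated noise realization and a simulated low-dose image: white Gaussian noise is shaped in the frequency domain by the square root of the NPS. This is a minimal illustration, not the authors' exact implementation; it assumes the quantum NPS has already been estimated (after removing the power-law anatomical component) and is given on the image's FFT grid, and the function names and normalization are illustrative.

```python
import numpy as np

def synthesize_quantum_noise(quantum_nps, rng=None):
    """Generate one realization of correlated noise whose power spectrum
    approximately follows the given 2-D quantum NPS (FFT layout assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    # Start from zero-mean white Gaussian noise of the same size as the NPS.
    white = rng.standard_normal(quantum_nps.shape)
    # Shape the white noise in the frequency domain: the square root of the
    # NPS acts as the magnitude response of the noise-coloring filter.
    noise_ft = np.fft.fft2(white) * np.sqrt(np.maximum(quantum_nps, 0.0))
    # Return to the image domain; the imaginary part is numerical residue.
    return np.real(np.fft.ifft2(noise_ft))

def make_dose_pair(ct_image, quantum_nps, rng=None):
    """Return an (original, simulated lower-dose) image pair for training."""
    noise = synthesize_quantum_noise(quantum_nps, rng)
    return ct_image, ct_image + noise
```

In practice the NPS would also need to be scaled to match the target dose reduction, and many such noise realizations can be drawn from a single estimated NPS to augment the training set.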
