Abstract
The purpose of a convolutional neural network (CNN)-based denoiser is to increase the diagnostic accuracy of low-dose computed tomography (LDCT) imaging. Achieving this requires a method that reflects diagnosis-related features during the denoising process. This work provides a training strategy for LDCT denoisers that relies more heavily on diagnostic task-related features to improve diagnostic accuracy. An attentive map derived from a lesion classifier (i.e., one that determines whether a lesion is present) represents the extent to which each pixel influences the classifier's decision, and is used as a weight to emphasize diagnostically important parts of the image. The proposed training method consists of two steps. In the first, the initial parameters of the CNN denoiser are trained on LDCT and normal-dose CT image pairs via supervised learning. In the second, the learned parameters are readjusted using the attentive map to restore the fine details of the image. Images produced by a denoiser trained with the proposed method preserve structural details and contrast better than those produced by conventional denoisers, and the proposed denoiser also yields higher lesion detectability and localization accuracy. In short, training with the attentive map preserves small structures and contrast in the denoised images better than training without it, and specifically improves the lesion detectability and localization accuracy of the denoiser.
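The core mechanism in the abstract is using the classifier-derived attentive map as a per-pixel weight during the denoiser's fine-tuning step. A minimal sketch of that idea is a weighted reconstruction loss in which pixels the lesion classifier attends to are penalized more heavily. This is an illustrative assumption about how the weighting could be applied, not the paper's exact formulation; the function name `attentive_weighted_mse` and the blending factor `alpha` are hypothetical.

```python
import numpy as np

def attentive_weighted_mse(denoised, target, attentive_map, alpha=1.0):
    """Weighted MSE in which pixels highlighted by the lesion
    classifier's attentive map contribute more to the loss."""
    # Normalize the attentive map to [0, 1] so it acts as a
    # relative importance weight (epsilon guards a flat map).
    m = attentive_map - attentive_map.min()
    m = m / (m.max() + 1e-8)
    # Baseline weight of 1 everywhere, boosted on attended pixels.
    weights = 1.0 + alpha * m
    return float(np.mean(weights * (denoised - target) ** 2))

# Toy example: the same reconstruction error costs more when the
# attentive map highlights the erring pixel.
target = np.zeros((2, 2))
denoised = np.array([[1.0, 0.0],
                     [0.0, 0.0]])
flat_map = np.zeros((2, 2))                      # no attention anywhere
peak_map = np.array([[1.0, 0.0],
                     [0.0, 0.0]])                # attention on the erring pixel
assert attentive_weighted_mse(denoised, target, peak_map) > \
       attentive_weighted_mse(denoised, target, flat_map)
```

Under this sketch, the first training step corresponds to `alpha = 0` (plain supervised MSE on LDCT/normal-dose pairs), and the second step re-adjusts the learned parameters with `alpha > 0` so diagnostically relevant regions dominate the fine-tuning signal.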