Abstract
Supervised deep convolutional neural network (CNN)-based methods have been actively used in clinical CT to reduce image noise. These networks are typically trained on paired high- and low-quality data from a large number of patient and/or phantom images. This training process is tedious, and a network trained under one condition may not generalize to patient images acquired and reconstructed under different conditions. We propose a self-trained deep CNN (ST_CNN) method for noise reduction in CT that does not rely on pre-existing training datasets: training is accomplished through extensive data augmentation in the projection domain, and inference is applied to the same data. Specifically, multiple independent noise insertions were applied to the original patient projection data to generate multiple realizations of low-quality projection data. Rotation augmentation was then applied to both the original and low-quality projection data by rotating the projection data directly, so that images could be rotated at arbitrary angles without introducing additional bias. A large number of paired low- and high-quality images from the same patient were then reconstructed for training the ST_CNN model. No significant difference was found between the ST_CNN and a conventional CNN model in peak signal-to-noise ratio (PSNR) or structural similarity index measure (SSIM). The ST_CNN model outperformed the conventional CNN model in noise texture and homogeneity in liver parenchyma and provided better subjective visualization of liver lesions. The ST_CNN may slightly sacrifice vessel sharpness relative to the conventional CNN model, but without affecting the visibility of peripheral vessels or the diagnosis of vascular pathology. The proposed ST_CNN method, trained on the data itself, may achieve image quality comparable to that of conventional deep CNN denoising methods pre-trained on external datasets.
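To make the projection-domain augmentation concrete, below is a minimal sketch, not the authors' implementation, of how such training pairs could be generated. The function names (`insert_noise`, `rotate_sinogram`), the Poisson noise model with incident intensity `I0`, the `dose_fraction` parameter, and the assumption of a full-rotation parallel-beam (or rebinned) sinogram, in which a circular shift along the view axis rotates the reconstructed image without interpolation, are all illustrative assumptions.

```python
import numpy as np

def insert_noise(proj, dose_fraction, I0=1e5, rng=None):
    """Simulate a lower-dose scan by adding Poisson noise to line integrals.

    proj: sinogram of line integrals, shape (n_views, n_detectors).
    The noise model, I0, and dose_fraction are illustrative assumptions,
    not the paper's exact noise-insertion technique.
    """
    rng = rng or np.random.default_rng()
    counts = I0 * np.exp(-proj)                        # transmitted counts at full dose
    low_counts = rng.poisson(dose_fraction * counts)   # reduced-dose realization
    low_counts = np.maximum(low_counts, 1)             # avoid log(0)
    return -np.log(low_counts / (dose_fraction * I0))  # back to line integrals

def rotate_sinogram(proj, shift_views):
    """Rotate the reconstructed image by rotating the sinogram itself.

    For a full 2*pi acquisition, a circular shift along the view axis
    rotates the reconstructed image by the corresponding angle without
    any pixel interpolation, so no additional bias is introduced.
    """
    return np.roll(proj, shift_views, axis=0)

# Build paired training data from a single patient's projection data.
rng = np.random.default_rng(0)
proj_full = np.abs(rng.standard_normal((720, 512)))    # placeholder sinogram
pairs = []
for _ in range(4):                                     # multiple noise realizations
    proj_low = insert_noise(proj_full, dose_fraction=0.25, rng=rng)
    for shift in rng.integers(0, 720, size=3):         # arbitrary rotation angles
        pairs.append((rotate_sinogram(proj_low, shift),
                      rotate_sinogram(proj_full, shift)))
# Each pair would then be reconstructed into low-/high-quality images
# and used to train the denoising CNN on this patient's own data.
```

The key design point this sketch illustrates is that rotation is performed by shifting view indices rather than interpolating image pixels, which is what allows arbitrary rotation angles without introducing additional bias.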