Abstract

Purpose: Our preliminary study demonstrated the capability of a deep learning neural network (DLNN) based method to eliminate a specific type of artifact in CT images. This work comprehensively studies the applicability of a U-net CNN architecture to improving the image quality of CT reconstructions by testing its performance on a variety of artifact removal tasks.

Methods: A U-net architecture is trained on a large dataset of contaminated and expected image pairs. The expected images, known as reference images, are acquired from ground truths or from a superior imaging system. Proper initialization of the network parameters, careful normalization of the original data, and a residual learning objective are incorporated into the framework to accelerate training convergence. Both numerical and real-data studies are conducted to validate the method.

Results: In the numerical studies, we found that DLNN-based artifact reduction is powerful: when the network is trained with ground truths, it can reduce nearly all types of artifacts and recover detailed structural information in low-quality images (e.g., plain FBP reconstructions). In real situations where ground truth is not available, the proposed method can characterize the discrepancy between contaminated data and higher-quality reference labels produced by other techniques, mimicking their artifact-reduction capability. Generalization to disjoint data is also examined on held-out testing data. All results show that the DLNN framework can be applied to various artifact reduction tasks and outperforms conventional methods with shorter runtime.

Conclusion: This work obtained promising results, with the U-net architecture successfully characterizing both global and local artifact patterns. By forward-propagating a contaminated CT image through the trained network, undesired artifacts can be greatly reduced while structural information is maintained.
It should be noted that the proposed deep network should be trained independently for each specific case.
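The residual learning objective mentioned in the Methods can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: the per-image zero-mean, unit-variance normalization and the exact pairing scheme are hypothetical choices for demonstration, not the paper's implementation. The key idea is that the network regresses the artifact component (reference minus contaminated image) rather than the clean image itself, and the artifact-reduced image is recovered at inference by adding the predicted residual back to the input.

```python
import numpy as np

def normalize(img, eps=1e-8):
    # One common normalization choice (assumed here, not specified in the
    # abstract): scale each image to zero mean and unit variance.
    mu, sigma = img.mean(), img.std()
    return (img - mu) / (sigma + eps), mu, sigma

def residual_target(contaminated, reference):
    # Residual learning target: the network is trained to predict the
    # artifact component, i.e. the difference between the reference
    # (ground truth or higher-quality reconstruction) and the input.
    return reference - contaminated

def apply_residual(contaminated, predicted_residual):
    # Inference: the artifact-reduced image is the contaminated input
    # plus the network's predicted residual.
    return contaminated + predicted_residual

# Toy demonstration with a synthetic 2-D "image" pair.
rng = np.random.default_rng(0)
reference = rng.random((8, 8))            # stand-in for a ground truth
artifact = 0.2 * rng.standard_normal((8, 8))
contaminated = reference + artifact       # stand-in for an FBP reconstruction

target = residual_target(contaminated, reference)
# A perfect predictor of the residual recovers the reference exactly.
restored = apply_residual(contaminated, target)
assert np.allclose(restored, reference)
```

Predicting the (typically sparse, small-magnitude) residual instead of the full image is a standard trick to ease optimization; a U-net would replace the perfect predictor used above.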