Abstract
A large slice thickness or slice increment leads to insufficient information in Computed Tomography (CT) data along the longitudinal direction, which degrades the quality of CT-based diagnosis. Traditional approaches such as high-resolution computed tomography (HRCT) and linear interpolation can mitigate this problem; however, HRCT increases the radiation dose, and linear interpolation introduces artifacts. In this study, we propose a deep-learning-based approach that reconstructs densely sliced CT from sparsely sliced CT data without any increase in dose. The proposed method reconstructs CT images from neighboring slices using a U-net architecture. To prevent the reconstructed slices from influencing one another, we propose a parallel architecture in which multiple U-nets work independently. Moreover, for a specific organ (e.g., the liver), we propose a range-clip technique that improves reconstruction quality by enlarging the range of the training data, thereby enhancing the learning of CT values within that organ. CT data from 130 patients were collected, with 80% used for training and the remaining 20% for testing. Experiments showed that, compared with linear interpolation, our parallel U-net architecture reduced the mean absolute error of CT values in the reconstructed slices by 22.05% and also reduced the incidence of artifacts around the boundaries of target organs. The proposed range-clip algorithm yielded further improvements of 15.12%, 11.04%, 10.94%, and 10.63% for the liver, left kidney, right kidney, and stomach, respectively. We also compared the proposed parallel architecture with the original U-net, and the experimental results demonstrated the superiority of our approach.
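The abstract does not give implementation details, but the parallel design can be illustrated with a minimal PyTorch sketch. Everything below is a hypothetical reading of the description: the class names (`TinyUNet`, `ParallelUNets`), the tiny encoder-decoder standing in for the paper's actual U-net, and the assumption that each intermediate slice is predicted from the same pair of neighboring slices by its own independent branch are all illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal stand-in for the paper's U-net (architecture details are
    not specified in the abstract; this encoder-decoder is illustrative)."""
    def __init__(self, in_ch=2, out_ch=1, base=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, out_ch, 3, padding=1),
        )

    def forward(self, x):
        e = self.enc(x)              # kept as the skip connection
        u = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, u], dim=1))

class ParallelUNets(nn.Module):
    """Hypothetical parallel architecture: one independent U-net per
    intermediate slice, so the reconstructed slices cannot influence
    one another."""
    def __init__(self, n_slices=3):
        super().__init__()
        self.nets = nn.ModuleList(TinyUNet(in_ch=2, out_ch=1)
                                  for _ in range(n_slices))

    def forward(self, upper, lower):
        # upper, lower: (B, 1, H, W) neighboring sparsely sampled slices
        pair = torch.cat([upper, lower], dim=1)
        # each branch predicts one of the n_slices missing slices in between
        return torch.cat([net(pair) for net in self.nets], dim=1)

# usage: reconstruct three intermediate slices between two measured ones
model = ParallelUNets(n_slices=3)
upper = torch.randn(1, 1, 64, 64)
lower = torch.randn(1, 1, 64, 64)
out = model(upper, lower)            # shape (1, 3, 64, 64)
```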
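Likewise, the range-clip technique can be sketched as a windowing step applied to the training data. This is an assumption about the mechanism: CT values (in Hounsfield units) are clipped to an organ-specific window, and that window is stretched to the full normalized range so that training concentrates on intensity variation inside the target organ. The function name `range_clip` and the window bounds below are illustrative, not values reported in the paper.

```python
import numpy as np

def range_clip(hu, lo=-100.0, hi=400.0):
    """Hypothetical range-clip preprocessing: clip CT values (HU) to an
    organ-specific window [lo, hi] (soft-tissue-like bounds chosen purely
    for illustration) and stretch that window to [0, 1], enlarging the
    effective value range inside the target organ."""
    clipped = np.clip(hu.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

# toy example: values outside the window saturate at 0 or 1,
# while values inside the organ window are spread across [0, 1]
slice_hu = np.array([[-1000.0, 40.0], [120.0, 1000.0]])
print(range_clip(slice_hu))
```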