Abstract

X-ray computed tomography (CT) plays a vital role in clinical diagnosis, but the associated radiation exposure carries a cancer risk for patients. Sparse-view CT reduces the radiation dose to the human body by sparsely sampling projections. However, images reconstructed from sparse-view sinograms often suffer from severe streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, a reconstruction is obtained from the sparse projections using the filtered back-projection (FBP) algorithm. The reconstructed images are then fed into the deep network for artifact correction. Specifically, we integrate attention-gating modules into a U-Net pipeline; these modules implicitly learn to emphasize features relevant to a given task while suppressing background regions. The attention mechanism combines the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To further improve performance, we incorporate a pre-trained ResNet50 model into our architecture. The model was trained and tested on a dataset from The Cancer Imaging Archive (TCIA), which contains images of various human organs acquired from multiple views. The experiments demonstrate that the proposed method is highly effective at removing streaking artifacts while preserving structural details. Quantitative evaluation further shows significant improvements in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) over competing methods, with an average PSNR of 33.9538, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified on the 2016 AAPM dataset. This approach therefore holds great promise for achieving high-quality sparse-view CT images.
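The attention-gating idea summarized above (local skip features reweighted by coefficients computed jointly from local and gating features) can be illustrated with a minimal NumPy sketch of additive attention. The function name, the flattened feature shapes, and the use of plain matrix products as stand-ins for 1×1 convolutions are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Simplified additive attention gate (Attention U-Net style).

    x   : (n, c) local skip-connection features from an intermediate stage
    g   : (n, c) gating features derived from the coarser scale
    W_x, W_g : (c, f) projections (stand-ins for 1x1 convolutions)
    psi : (f, 1) projection producing one attention coefficient per position
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)    # ReLU(W_x x + W_g g)
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))  # sigmoid -> coefficients in (0, 1)
    return x * alpha                          # emphasize relevant features, suppress background

# Toy usage: 4 spatial positions, 3 channels, 2 intermediate features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
g = rng.normal(size=(4, 3))
out = attention_gate(x, g,
                     rng.normal(size=(3, 2)),
                     rng.normal(size=(3, 2)),
                     rng.normal(size=(2, 1)))
print(out.shape)  # (4, 3)
```

Because the sigmoid keeps every coefficient strictly between 0 and 1, the gate can only attenuate skip features, never amplify them, which is what lets the network softly suppress background regions.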
