Abstract

Purpose: The reconstruction performance of the deep image prior (DIP) approach is limited by its conventional convolutional layer structure, which makes its potential difficult to unlock. To improve reconstruction quality and suppress artifacts, we propose a higher-performing DIP algorithm and verify its superiority on a recent benchmark.

Methods: We construct a new U-ConformerNet structure as the DIP algorithm's network, replacing the traditional convolution-based U-Net, and introduce an LPIPS deep-network feature distance regularization. The algorithm can switch freely between supervised and unsupervised modes to meet different needs.

Results: Reconstruction was performed on the LoDoPaB low-dose CT dataset. Our algorithm attains a PSNR above 35 dB in the unsupervised setting and above 36 dB in the supervised setting, both better than the performance of DIP-TV. Furthermore, with the help of deep networks, the accuracy of the method correlates positively with the quality of the a priori image. In noise removal and artifact suppression, the DIP algorithm with the U-ConformerNet structure outperforms the standard convolution-based DIP.

Conclusions: Experimental verification shows that, in unsupervised mode, the algorithm improves the output PSNR by at least 2–3 dB over the DIP-TV algorithm (proposed in 2020). In supervised mode, its performance approaches that of state-of-the-art end-to-end deep learning algorithms.
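To make the method concrete, the following is a minimal sketch of a DIP reconstruction loop with an LPIPS feature-distance regularizer, as the abstract describes. It assumes a PyTorch setting: `net` stands in for the paper's U-ConformerNet, `A` for a differentiable CT forward operator, `y` for the measured sinogram, and `x_prior` for the a priori image; none of these names come from the authors' code. The `lpips` package is the public LPIPS implementation.

```python
import torch
import lpips  # pip install lpips

def dip_lpips_reconstruct(net, A, y, x_prior=None, lam=0.1,
                          steps=2000, lr=1e-4, in_shape=(1, 32, 128, 128)):
    """Hypothetical DIP loop: fit an untrained `net` so that a fixed random
    input maps to an image consistent with the measurements `y`."""
    z = torch.randn(in_shape)            # fixed random input, as in DIP
    perceptual = lpips.LPIPS(net='vgg')  # deep-feature (LPIPS) distance
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = net(z)                       # candidate reconstruction
        loss = ((A(x) - y) ** 2).mean()  # data-fidelity term
        if x_prior is not None:
            # Supervised mode: pull the reconstruction toward the prior
            # image in LPIPS feature space. LPIPS expects 3-channel inputs
            # scaled to [-1, 1], so grayscale slices are channel-repeated.
            loss = loss + lam * perceptual(x.repeat(1, 3, 1, 1),
                                           x_prior.repeat(1, 3, 1, 1)).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```

Omitting `x_prior` recovers the unsupervised mode, where only the data-fidelity term and the network architecture itself regularize the reconstruction.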
