Abstract

This paper investigates the application of unsupervised learning methods to computed tomography (CT) reconstruction. To motivate our work, we review several existing priors, namely the truncated Gaussian prior, the ℓ1 prior, the total variation prior, and the deep image prior (DIP). We find that DIP outperforms the other three priors in representational capability and visual performance. However, the performance of DIP deteriorates once the number of iterations exceeds a certain threshold, owing to overfitting. To address this issue, we propose a novel method (MCDIP-ADMM) based on the multi-code deep image prior (MCDIP) and the plug-and-play alternating direction method of multipliers (ADMM). Specifically, MCDIP uses multiple latent codes to generate a series of feature maps at an intermediate layer of a generator model; these maps are then composed with trainable weights to represent the complete image prior. Experimental results demonstrate the superior performance of the proposed MCDIP-ADMM over three existing competitors. For parallel-beam projection with Gaussian noise, MCDIP-ADMM achieves an average improvement in peak signal-to-noise ratio (PSNR) of 4.3 dB over DIP, 1.7 dB over ADMM DIP-weighted total variation (DIP-WTV), and 1.2 dB over PnP-DIP. Similarly, for fan-beam projection with Poisson noise, MCDIP-ADMM achieves an average PSNR improvement of 3.09 dB over DIP, 1.86 dB over ADMM DIP-WTV, and 0.84 dB over PnP-DIP.
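
The abstract only sketches the MCDIP construction, so the following is a minimal PyTorch illustration of the idea as stated: several fixed latent codes each produce an intermediate feature map, and the maps are combined with trainable weights before decoding to an image. The layer sizes, the number of codes, and the softmax normalization of the composition weights are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MCDIPGenerator(nn.Module):
    """Sketch of a multi-code deep image prior (MCDIP) generator.

    K fixed random latent codes are encoded into intermediate feature
    maps, which are composed with trainable weights and then decoded
    into a single reconstructed image. Architecture details here are
    assumptions for illustration only.
    """

    def __init__(self, num_codes=5, latent_ch=32, feat_ch=64, size=64):
        super().__init__()
        # Fixed random latent codes z_1..z_K (not optimized, as in DIP).
        self.codes = nn.Parameter(
            torch.randn(num_codes, latent_ch, size, size),
            requires_grad=False)
        # Encoder: latent code -> intermediate feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(latent_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Trainable composition weights alpha_1..alpha_K (softmax-normalized
        # here; the normalization scheme is an assumption).
        self.alpha = nn.Parameter(torch.ones(num_codes) / num_codes)
        # Decoder: composite feature map -> reconstructed image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self):
        # Feature maps F_k for each latent code, stacked to (K, C, H, W).
        feats = torch.stack([self.encoder(z.unsqueeze(0)).squeeze(0)
                             for z in self.codes])
        # Weighted composition sum_k alpha_k * F_k.
        w = torch.softmax(self.alpha, dim=0).view(-1, 1, 1, 1)
        composite = (w * feats).sum(dim=0, keepdim=True)
        return self.decoder(composite)
```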

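For context, a generic plug-and-play ADMM loop of the kind MCDIP-ADMM builds on might look as follows. The least-squares data fidelity, the inner gradient steps, and all parameter values are assumptions for illustration; `prior_step` is a hypothetical callable standing in for the MCDIP fitting stage, and `A`/`At` stand for the forward projector and its adjoint.

```python
import numpy as np

def pnp_admm(A, At, y, prior_step, rho=1.0, n_iter=50,
             n_inner=20, step=1e-3):
    """Generic plug-and-play ADMM sketch for min_x 0.5||Ax - y||^2 + g(x),
    where the prior g is realized implicitly by `prior_step`."""
    x = At(y)                  # crude initialization by back-projection
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: gradient steps on
        # 0.5||Ax - y||^2 + (rho/2)||x - v + u||^2.
        for _ in range(n_inner):
            grad = At(A(x) - y) + rho * (x - v + u)
            x = x - step * grad
        # v-update: prior step (here, refitting MCDIP to x + u).
        v = prior_step(x + u)
        # Dual variable update.
        u = u + x - v
    return x
```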